2025-06-05 18:56:23.049768 | Job console starting
2025-06-05 18:56:23.063616 | Updating git repos
2025-06-05 18:56:23.117489 | Cloning repos into workspace
2025-06-05 18:56:23.285961 | Restoring repo states
2025-06-05 18:56:23.308366 | Merging changes
2025-06-05 18:56:23.308386 | Checking out repos
2025-06-05 18:56:23.604525 | Preparing playbooks
2025-06-05 18:56:24.252966 | Running Ansible setup
2025-06-05 18:56:28.579164 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-05 18:56:29.358073 |
2025-06-05 18:56:29.358243 | PLAY [Base pre]
2025-06-05 18:56:29.375700 |
2025-06-05 18:56:29.375844 | TASK [Setup log path fact]
2025-06-05 18:56:29.396436 | orchestrator | ok
2025-06-05 18:56:29.413871 |
2025-06-05 18:56:29.414014 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-05 18:56:29.466661 | orchestrator | ok
2025-06-05 18:56:29.488700 |
2025-06-05 18:56:29.488849 | TASK [emit-job-header : Print job information]
2025-06-05 18:56:29.545038 | # Job Information
2025-06-05 18:56:29.545241 | Ansible Version: 2.16.14
2025-06-05 18:56:29.545277 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-05 18:56:29.545311 | Pipeline: post
2025-06-05 18:56:29.545335 | Executor: 521e9411259a
2025-06-05 18:56:29.545356 | Triggered by: https://github.com/osism/testbed/commit/e9723c4314cbcc1a6601a2ba2399f4b60b0e1891
2025-06-05 18:56:29.545379 | Event ID: c2267106-423e-11f0-9928-cafe343ef454
2025-06-05 18:56:29.555527 |
2025-06-05 18:56:29.555667 | LOOP [emit-job-header : Print node information]
2025-06-05 18:56:29.693333 | orchestrator | ok:
2025-06-05 18:56:29.693640 | orchestrator | # Node Information
2025-06-05 18:56:29.693699 | orchestrator | Inventory Hostname: orchestrator
2025-06-05 18:56:29.693744 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-05 18:56:29.693781 | orchestrator | Username: zuul-testbed02
2025-06-05 18:56:29.693816 | orchestrator | Distro: Debian 12.11
2025-06-05 18:56:29.693857 | orchestrator | Provider: static-testbed
2025-06-05 18:56:29.693891 | orchestrator | Region:
2025-06-05 18:56:29.694936 | orchestrator | Label: testbed-orchestrator
2025-06-05 18:56:29.695014 | orchestrator | Product Name: OpenStack Nova
2025-06-05 18:56:29.695050 | orchestrator | Interface IP: 81.163.193.140
2025-06-05 18:56:29.711199 |
2025-06-05 18:56:29.711331 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-05 18:56:30.223845 | orchestrator -> localhost | changed
2025-06-05 18:56:30.235131 |
2025-06-05 18:56:30.235277 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-05 18:56:31.358215 | orchestrator -> localhost | changed
2025-06-05 18:56:31.382947 |
2025-06-05 18:56:31.383103 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-05 18:56:31.684408 | orchestrator -> localhost | ok
2025-06-05 18:56:31.697620 |
2025-06-05 18:56:31.697776 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-05 18:56:31.733467 | orchestrator | ok
2025-06-05 18:56:31.753371 | orchestrator | included: /var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-05 18:56:31.761674 |
2025-06-05 18:56:31.761780 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-05 18:56:33.683187 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-05 18:56:33.683705 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/work/110d1ba26f2f4a0a94540b539119b677_id_rsa
2025-06-05 18:56:33.683816 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/work/110d1ba26f2f4a0a94540b539119b677_id_rsa.pub
2025-06-05 18:56:33.683890 | orchestrator -> localhost | The key fingerprint is:
2025-06-05 18:56:33.683964 | orchestrator -> localhost | SHA256:9ErqyUjTMRXmDHleUyuDYTu/g+78YT+NW3xKbj19Sqg zuul-build-sshkey
2025-06-05 18:56:33.684026 | orchestrator -> localhost | The key's randomart image is:
2025-06-05 18:56:33.684107 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-05 18:56:33.684169 | orchestrator -> localhost | | ..= .. |
2025-06-05 18:56:33.684229 | orchestrator -> localhost | | .*.=o . |
2025-06-05 18:56:33.684285 | orchestrator -> localhost | | oO.o.. |
2025-06-05 18:56:33.684341 | orchestrator -> localhost | | o.+ o |
2025-06-05 18:56:33.684397 | orchestrator -> localhost | | o S o |
2025-06-05 18:56:33.684470 | orchestrator -> localhost | | . = o . o |
2025-06-05 18:56:33.684554 | orchestrator -> localhost | | o o o = .o=.o|
2025-06-05 18:56:33.684611 | orchestrator -> localhost | | . = + . =o=o++|
2025-06-05 18:56:33.684670 | orchestrator -> localhost | | . +.+.E o++.o|
2025-06-05 18:56:33.684728 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-05 18:56:33.684856 | orchestrator -> localhost | ok: Runtime: 0:00:01.393676
2025-06-05 18:56:33.701115 |
2025-06-05 18:56:33.701263 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-05 18:56:33.732234 | orchestrator | ok
2025-06-05 18:56:33.742350 | orchestrator | included: /var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-05 18:56:33.752682 |
2025-06-05 18:56:33.752786 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-05 18:56:33.776638 | orchestrator | skipping: Conditional result was False
2025-06-05 18:56:33.784649 |
2025-06-05 18:56:33.784757 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-05 18:56:34.399883 | orchestrator | changed
2025-06-05 18:56:34.408292 |
2025-06-05 18:56:34.408433 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-05 18:56:34.716045 | orchestrator | ok
2025-06-05 18:56:34.725457 |
2025-06-05 18:56:34.725703 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-05 18:56:35.156370 | orchestrator | ok
2025-06-05 18:56:35.162796 |
2025-06-05 18:56:35.162963 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-05 18:56:35.556929 | orchestrator | ok
2025-06-05 18:56:35.564859 |
2025-06-05 18:56:35.564985 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-05 18:56:35.589593 | orchestrator | skipping: Conditional result was False
2025-06-05 18:56:35.598695 |
2025-06-05 18:56:35.598854 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-05 18:56:36.089930 | orchestrator -> localhost | changed
2025-06-05 18:56:36.116231 |
2025-06-05 18:56:36.116399 | TASK [add-build-sshkey : Add back temp key]
2025-06-05 18:56:36.444267 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/work/110d1ba26f2f4a0a94540b539119b677_id_rsa (zuul-build-sshkey)
2025-06-05 18:56:36.444695 | orchestrator -> localhost | ok: Runtime: 0:00:00.009908
2025-06-05 18:56:36.455406 |
2025-06-05 18:56:36.455571 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-05 18:56:36.899103 | orchestrator | ok
2025-06-05 18:56:36.907108 |
2025-06-05 18:56:36.907253 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-05 18:56:36.942574 | orchestrator | skipping: Conditional result was False
2025-06-05 18:56:37.008722 |
2025-06-05 18:56:37.008861 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-05 18:56:37.426536 | orchestrator | ok
2025-06-05 18:56:37.446236 |
2025-06-05 18:56:37.446461 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-05 18:56:37.479860 | orchestrator | ok
2025-06-05 18:56:37.490426 |
2025-06-05 18:56:37.490626 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-05 18:56:37.771229 | orchestrator -> localhost | ok
2025-06-05 18:56:37.779060 |
2025-06-05 18:56:37.779186 | TASK [validate-host : Collect information about the host]
2025-06-05 18:56:38.960205 | orchestrator | ok
2025-06-05 18:56:38.990154 |
2025-06-05 18:56:38.990318 | TASK [validate-host : Sanitize hostname]
2025-06-05 18:56:39.071015 | orchestrator | ok
2025-06-05 18:56:39.080278 |
2025-06-05 18:56:39.080448 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-05 18:56:39.664474 | orchestrator -> localhost | changed
2025-06-05 18:56:39.680654 |
2025-06-05 18:56:39.680849 | TASK [validate-host : Collect information about zuul worker]
2025-06-05 18:56:40.125522 | orchestrator | ok
2025-06-05 18:56:40.134099 |
2025-06-05 18:56:40.134254 | TASK [validate-host : Write out all zuul information for each host]
2025-06-05 18:56:40.713434 | orchestrator -> localhost | changed
2025-06-05 18:56:40.724761 |
2025-06-05 18:56:40.724900 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-05 18:56:41.048551 | orchestrator | ok
2025-06-05 18:56:41.055228 |
2025-06-05 18:56:41.055354 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-05 18:57:12.793788 | orchestrator | changed:
2025-06-05 18:57:12.794093 | orchestrator | .d..t...... src/
2025-06-05 18:57:12.794151 | orchestrator | .d..t...... src/github.com/
2025-06-05 18:57:12.794191 | orchestrator | .d..t...... src/github.com/osism/
2025-06-05 18:57:12.794227 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-05 18:57:12.794260 | orchestrator | RedHat.yml
2025-06-05 18:57:12.809556 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-05 18:57:12.809578 | orchestrator | RedHat.yml
2025-06-05 18:57:12.809642 | orchestrator | = 1.53.0"...
2025-06-05 18:57:26.418179 | orchestrator | 18:57:26.417 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-05 18:57:27.441092 | orchestrator | 18:57:27.440 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-05 18:57:28.334410 | orchestrator | 18:57:28.334 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-05 18:57:29.590860 | orchestrator | 18:57:29.590 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-05 18:57:30.750929 | orchestrator | 18:57:30.750 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-05 18:57:31.722688 | orchestrator | 18:57:31.722 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-05 18:57:32.627821 | orchestrator | 18:57:32.627 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-05 18:57:32.627888 | orchestrator | 18:57:32.627 STDOUT terraform: Providers are signed by their developers.
2025-06-05 18:57:32.627894 | orchestrator | 18:57:32.627 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-05 18:57:32.627899 | orchestrator | 18:57:32.627 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-05 18:57:32.627947 | orchestrator | 18:57:32.627 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-05 18:57:32.627993 | orchestrator | 18:57:32.627 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-05 18:57:32.628053 | orchestrator | 18:57:32.627 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-05 18:57:32.628061 | orchestrator | 18:57:32.628 STDOUT terraform: you run "tofu init" in the future.
2025-06-05 18:57:32.630245 | orchestrator | 18:57:32.630 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-05 18:57:32.630351 | orchestrator | 18:57:32.630 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-05 18:57:32.630366 | orchestrator | 18:57:32.630 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-05 18:57:32.630377 | orchestrator | 18:57:32.630 STDOUT terraform: should now work.
2025-06-05 18:57:32.630390 | orchestrator | 18:57:32.630 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-05 18:57:32.630418 | orchestrator | 18:57:32.630 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-05 18:57:32.630466 | orchestrator | 18:57:32.630 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-05 18:57:32.838099 | orchestrator | 18:57:32.836 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-05 18:57:33.023052 | orchestrator | 18:57:33.022 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-05 18:57:33.023147 | orchestrator | 18:57:33.022 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-05 18:57:33.023159 | orchestrator | 18:57:33.022 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-05 18:57:33.023166 | orchestrator | 18:57:33.023 STDOUT terraform: for this configuration.
2025-06-05 18:57:33.225836 | orchestrator | 18:57:33.225 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-05 18:57:33.336691 | orchestrator | 18:57:33.336 STDOUT terraform: ci.auto.tfvars
2025-06-05 18:57:33.339649 | orchestrator | 18:57:33.339 STDOUT terraform: default_custom.tf
2025-06-05 18:57:33.520479 | orchestrator | 18:57:33.517 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed02/terraform` instead.
2025-06-05 18:57:34.505618 | orchestrator | 18:57:34.505 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
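The provider installation above pins three providers for this run (hashicorp/null v3.2.4, terraform-provider-openstack/openstack v3.1.0, hashicorp/local v2.5.3, the latter matching the logged constraint ">= 2.2.0"). As an illustration only — the real constraints live in the osism/testbed Terraform sources, which are not part of this log — a `required_providers` block consistent with what init resolved could look like:

```hcl
terraform {
  required_providers {
    # Resolved to v3.2.4 in this run
    null = {
      source = "hashicorp/null"
    }
    # Resolved to v3.1.0 in this run
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
    # Constraint ">= 2.2.0" appears in the log; resolved to v2.5.3
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
  }
}
```

The `.terraform.lock.hcl` file mentioned in the output records the exact versions and signing keys, so later `tofu init` runs reproduce the same selections.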
2025-06-05 18:57:35.087781 | orchestrator | 18:57:35.087 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-05 18:57:35.329887 | orchestrator | 18:57:35.327 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-05 18:57:35.329982 | orchestrator | 18:57:35.327 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-05 18:57:35.329997 | orchestrator | 18:57:35.327 STDOUT terraform:   + create
2025-06-05 18:57:35.330009 | orchestrator | 18:57:35.327 STDOUT terraform:  <= read (data resources)
2025-06-05 18:57:35.330050 | orchestrator | 18:57:35.327 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-05 18:57:35.330063 | orchestrator | 18:57:35.327 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-06-05 18:57:35.330074 | orchestrator | 18:57:35.327 STDOUT terraform:   # (config refers to values not yet known)
2025-06-05 18:57:35.330084 | orchestrator | 18:57:35.327 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-05 18:57:35.330094 | orchestrator | 18:57:35.327 STDOUT terraform:       + checksum = (known after apply)
2025-06-05 18:57:35.330104 | orchestrator | 18:57:35.327 STDOUT terraform:       + created_at = (known after apply)
2025-06-05 18:57:35.330113 | orchestrator | 18:57:35.327 STDOUT terraform:       + file = (known after apply)
2025-06-05 18:57:35.330123 | orchestrator | 18:57:35.327 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.330133 | orchestrator | 18:57:35.327 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.330143 | orchestrator | 18:57:35.327 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-05 18:57:35.330152 | orchestrator | 18:57:35.327 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-05 18:57:35.330162 | orchestrator | 18:57:35.327 STDOUT terraform:       + most_recent = true
2025-06-05 18:57:35.330193 | orchestrator | 18:57:35.327 STDOUT terraform:       + name = (known after apply)
2025-06-05 18:57:35.330203 | orchestrator | 18:57:35.327 STDOUT terraform:       + protected = (known after apply)
2025-06-05 18:57:35.330212 | orchestrator | 18:57:35.328 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.330222 | orchestrator | 18:57:35.328 STDOUT terraform:       + schema = (known after apply)
2025-06-05 18:57:35.330231 | orchestrator | 18:57:35.328 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-05 18:57:35.330241 | orchestrator | 18:57:35.328 STDOUT terraform:       + tags = (known after apply)
2025-06-05 18:57:35.330251 | orchestrator | 18:57:35.328 STDOUT terraform:       + updated_at = (known after apply)
2025-06-05 18:57:35.330261 | orchestrator | 18:57:35.328 STDOUT terraform:     }
2025-06-05 18:57:35.330271 | orchestrator | 18:57:35.328 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-06-05 18:57:35.330365 | orchestrator | 18:57:35.328 STDOUT terraform:   # (config refers to values not yet known)
2025-06-05 18:57:35.330379 | orchestrator | 18:57:35.328 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-05 18:57:35.330389 | orchestrator | 18:57:35.328 STDOUT terraform:       + checksum = (known after apply)
2025-06-05 18:57:35.330399 | orchestrator | 18:57:35.328 STDOUT terraform:       + created_at = (known after apply)
2025-06-05 18:57:35.330408 | orchestrator | 18:57:35.328 STDOUT terraform:       + file = (known after apply)
2025-06-05 18:57:35.330418 | orchestrator | 18:57:35.328 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.330428 | orchestrator | 18:57:35.328 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.330437 | orchestrator | 18:57:35.328 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-06-05 18:57:35.330446 | orchestrator | 18:57:35.328 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-06-05 18:57:35.330456 | orchestrator | 18:57:35.328 STDOUT terraform:       + most_recent = true
2025-06-05 18:57:35.330465 | orchestrator | 18:57:35.328 STDOUT terraform:       + name = (known after apply)
2025-06-05 18:57:35.330475 | orchestrator | 18:57:35.328 STDOUT terraform:       + protected = (known after apply)
2025-06-05 18:57:35.330484 | orchestrator | 18:57:35.328 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.330517 | orchestrator | 18:57:35.328 STDOUT terraform:       + schema = (known after apply)
2025-06-05 18:57:35.330534 | orchestrator | 18:57:35.328 STDOUT terraform:       + size_bytes = (known after apply)
2025-06-05 18:57:35.330544 | orchestrator | 18:57:35.328 STDOUT terraform:       + tags = (known after apply)
2025-06-05 18:57:35.330553 | orchestrator | 18:57:35.328 STDOUT terraform:       + updated_at = (known after apply)
2025-06-05 18:57:35.330563 | orchestrator | 18:57:35.328 STDOUT terraform:     }
2025-06-05 18:57:35.330572 | orchestrator | 18:57:35.328 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-06-05 18:57:35.330582 | orchestrator | 18:57:35.328 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-06-05 18:57:35.330591 | orchestrator | 18:57:35.328 STDOUT terraform:       + content = (known after apply)
2025-06-05 18:57:35.330601 | orchestrator | 18:57:35.328 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-05 18:57:35.330618 | orchestrator | 18:57:35.328 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-05 18:57:35.330628 | orchestrator | 18:57:35.328 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-05 18:57:35.330637 | orchestrator | 18:57:35.328 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-05 18:57:35.330647 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-05 18:57:35.330657 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-05 18:57:35.330666 | orchestrator | 18:57:35.329 STDOUT terraform:       + directory_permission = "0777"
2025-06-05 18:57:35.330676 | orchestrator | 18:57:35.329 STDOUT terraform:       + file_permission = "0644"
2025-06-05 18:57:35.330686 | orchestrator | 18:57:35.329 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-06-05 18:57:35.330695 | orchestrator | 18:57:35.329 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.330705 | orchestrator | 18:57:35.329 STDOUT terraform:     }
2025-06-05 18:57:35.330714 | orchestrator | 18:57:35.329 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-06-05 18:57:35.330724 | orchestrator | 18:57:35.329 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-06-05 18:57:35.330733 | orchestrator | 18:57:35.329 STDOUT terraform:       + content = (known after apply)
2025-06-05 18:57:35.330742 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-05 18:57:35.330752 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-05 18:57:35.330761 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-05 18:57:35.330771 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-05 18:57:35.330780 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-05 18:57:35.330790 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-05 18:57:35.330799 | orchestrator | 18:57:35.329 STDOUT terraform:       + directory_permission = "0777"
2025-06-05 18:57:35.330808 | orchestrator | 18:57:35.329 STDOUT terraform:       + file_permission = "0644"
2025-06-05 18:57:35.330818 | orchestrator | 18:57:35.329 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-06-05 18:57:35.330827 | orchestrator | 18:57:35.329 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.330837 | orchestrator | 18:57:35.329 STDOUT terraform:     }
2025-06-05 18:57:35.330846 | orchestrator | 18:57:35.329 STDOUT terraform:   # local_file.inventory will be created
2025-06-05 18:57:35.330855 | orchestrator | 18:57:35.329 STDOUT terraform:   + resource "local_file" "inventory" {
2025-06-05 18:57:35.330865 | orchestrator | 18:57:35.329 STDOUT terraform:       + content = (known after apply)
2025-06-05 18:57:35.330874 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-05 18:57:35.330883 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-05 18:57:35.330910 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-05 18:57:35.330926 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-05 18:57:35.330935 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-05 18:57:35.330945 | orchestrator | 18:57:35.329 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-05 18:57:35.330954 | orchestrator | 18:57:35.330 STDOUT terraform:       + directory_permission = "0777"
2025-06-05 18:57:35.330964 | orchestrator | 18:57:35.330 STDOUT terraform:       + file_permission = "0644"
2025-06-05 18:57:35.330973 | orchestrator | 18:57:35.330 STDOUT terraform:       + filename = "inventory.ci"
2025-06-05 18:57:35.330983 | orchestrator | 18:57:35.330 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.330992 | orchestrator | 18:57:35.330 STDOUT terraform:     }
2025-06-05 18:57:35.331002 | orchestrator | 18:57:35.330 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-06-05 18:57:35.331011 | orchestrator | 18:57:35.330 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-06-05 18:57:35.331021 | orchestrator | 18:57:35.330 STDOUT terraform:       + content = (sensitive value)
2025-06-05 18:57:35.331030 | orchestrator | 18:57:35.330 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-06-05 18:57:35.331040 | orchestrator | 18:57:35.330 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-06-05 18:57:35.331050 | orchestrator | 18:57:35.330 STDOUT terraform:       + content_md5 = (known after apply)
2025-06-05 18:57:35.331059 | orchestrator | 18:57:35.330 STDOUT terraform:       + content_sha1 = (known after apply)
2025-06-05 18:57:35.331069 | orchestrator | 18:57:35.330 STDOUT terraform:       + content_sha256 = (known after apply)
2025-06-05 18:57:35.331078 | orchestrator | 18:57:35.330 STDOUT terraform:       + content_sha512 = (known after apply)
2025-06-05 18:57:35.331088 | orchestrator | 18:57:35.330 STDOUT terraform:       + directory_permission = "0700"
2025-06-05 18:57:35.331097 | orchestrator | 18:57:35.330 STDOUT terraform:       + file_permission = "0600"
2025-06-05 18:57:35.331107 | orchestrator | 18:57:35.330 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-06-05 18:57:35.331116 | orchestrator | 18:57:35.330 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.331126 | orchestrator | 18:57:35.330 STDOUT terraform:     }
2025-06-05 18:57:35.331135 | orchestrator | 18:57:35.330 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-06-05 18:57:35.331145 | orchestrator | 18:57:35.330 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-06-05 18:57:35.331154 | orchestrator | 18:57:35.330 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.331164 | orchestrator | 18:57:35.330 STDOUT terraform:     }
2025-06-05 18:57:35.331174 | orchestrator | 18:57:35.330 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-05 18:57:35.331183 | orchestrator | 18:57:35.330 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-05 18:57:35.331193 | orchestrator | 18:57:35.330 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.331208 | orchestrator | 18:57:35.330 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.331218 | orchestrator | 18:57:35.330 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.331232 | orchestrator | 18:57:35.330 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.331241 | orchestrator | 18:57:35.330 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.331251 | orchestrator | 18:57:35.330 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-06-05 18:57:35.331264 | orchestrator | 18:57:35.330 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.331297 | orchestrator | 18:57:35.331 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.331316 | orchestrator | 18:57:35.331 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.331333 | orchestrator | 18:57:35.331 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.331350 | orchestrator | 18:57:35.331 STDOUT terraform:     }
2025-06-05 18:57:35.331360 | orchestrator | 18:57:35.331 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-05 18:57:35.331370 | orchestrator | 18:57:35.331 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-05 18:57:35.331379 | orchestrator | 18:57:35.331 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.331393 | orchestrator | 18:57:35.331 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.331402 | orchestrator | 18:57:35.331 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.331411 | orchestrator | 18:57:35.331 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.331421 | orchestrator | 18:57:35.331 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.331430 | orchestrator | 18:57:35.331 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-06-05 18:57:35.331443 | orchestrator | 18:57:35.331 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.331453 | orchestrator | 18:57:35.331 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.331466 | orchestrator | 18:57:35.331 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.331478 | orchestrator | 18:57:35.331 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.331491 | orchestrator | 18:57:35.331 STDOUT terraform:     }
2025-06-05 18:57:35.331567 | orchestrator | 18:57:35.331 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-05 18:57:35.331585 | orchestrator | 18:57:35.331 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-05 18:57:35.331650 | orchestrator | 18:57:35.331 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.331667 | orchestrator | 18:57:35.331 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.331680 | orchestrator | 18:57:35.331 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.331725 | orchestrator | 18:57:35.331 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.331747 | orchestrator | 18:57:35.331 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.331818 | orchestrator | 18:57:35.331 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-06-05 18:57:35.331834 | orchestrator | 18:57:35.331 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.331867 | orchestrator | 18:57:35.331 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.331881 | orchestrator | 18:57:35.331 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.331907 | orchestrator | 18:57:35.331 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.331921 | orchestrator | 18:57:35.331 STDOUT terraform:     }
2025-06-05 18:57:35.331968 | orchestrator | 18:57:35.331 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-05 18:57:35.332014 | orchestrator | 18:57:35.331 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-05 18:57:35.332050 | orchestrator | 18:57:35.332 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.332064 | orchestrator | 18:57:35.332 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.332108 | orchestrator | 18:57:35.332 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.332144 | orchestrator | 18:57:35.332 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.332185 | orchestrator | 18:57:35.332 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.332230 | orchestrator | 18:57:35.332 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-06-05 18:57:35.332266 | orchestrator | 18:57:35.332 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.332306 | orchestrator | 18:57:35.332 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.332320 | orchestrator | 18:57:35.332 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.332333 | orchestrator | 18:57:35.332 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.332345 | orchestrator | 18:57:35.332 STDOUT terraform:     }
2025-06-05 18:57:35.332396 | orchestrator | 18:57:35.332 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-05 18:57:35.332440 | orchestrator | 18:57:35.332 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-05 18:57:35.332478 | orchestrator | 18:57:35.332 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.332492 | orchestrator | 18:57:35.332 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.332537 | orchestrator | 18:57:35.332 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.332575 | orchestrator | 18:57:35.332 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.332613 | orchestrator | 18:57:35.332 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.332660 | orchestrator | 18:57:35.332 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-06-05 18:57:35.332696 | orchestrator | 18:57:35.332 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.332718 | orchestrator | 18:57:35.332 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.332731 | orchestrator | 18:57:35.332 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.332753 | orchestrator | 18:57:35.332 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.332767 | orchestrator | 18:57:35.332 STDOUT terraform:     }
2025-06-05 18:57:35.332815 | orchestrator | 18:57:35.332 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-05 18:57:35.332862 | orchestrator | 18:57:35.332 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-05 18:57:35.332903 | orchestrator | 18:57:35.332 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.332923 | orchestrator | 18:57:35.332 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.332956 | orchestrator | 18:57:35.332 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.332994 | orchestrator | 18:57:35.332 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.333042 | orchestrator | 18:57:35.332 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.333088 | orchestrator | 18:57:35.333 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-06-05 18:57:35.333124 | orchestrator | 18:57:35.333 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.333138 | orchestrator | 18:57:35.333 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.333165 | orchestrator | 18:57:35.333 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.333179 | orchestrator | 18:57:35.333 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.333191 | orchestrator | 18:57:35.333 STDOUT terraform:     }
2025-06-05 18:57:35.333244 | orchestrator | 18:57:35.333 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-05 18:57:35.333342 | orchestrator | 18:57:35.333 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-05 18:57:35.333357 | orchestrator | 18:57:35.333 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.333370 | orchestrator | 18:57:35.333 STDOUT terraform:       + availability_zone = "nova"
2025-06-05 18:57:35.333390 | orchestrator | 18:57:35.333 STDOUT terraform:       + id = (known after apply)
2025-06-05 18:57:35.333420 | orchestrator | 18:57:35.333 STDOUT terraform:       + image_id = (known after apply)
2025-06-05 18:57:35.333457 | orchestrator | 18:57:35.333 STDOUT terraform:       + metadata = (known after apply)
2025-06-05 18:57:35.333511 | orchestrator | 18:57:35.333 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-06-05 18:57:35.333546 | orchestrator | 18:57:35.333 STDOUT terraform:       + region = (known after apply)
2025-06-05 18:57:35.333560 | orchestrator | 18:57:35.333 STDOUT terraform:       + size = 80
2025-06-05 18:57:35.333591 | orchestrator | 18:57:35.333 STDOUT terraform:       + volume_retype_policy = "never"
2025-06-05 18:57:35.333605 | orchestrator | 18:57:35.333 STDOUT terraform:       + volume_type = "ssd"
2025-06-05 18:57:35.333618 | orchestrator | 18:57:35.333 STDOUT terraform:     }
2025-06-05 18:57:35.333667 | orchestrator | 18:57:35.333 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-05 18:57:35.333711 | orchestrator | 18:57:35.333 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-05 18:57:35.333747 | orchestrator | 18:57:35.333 STDOUT terraform:       + attachment = (known after apply)
2025-06-05 18:57:35.333774 | orchestrator | 18:57:35.333 STDOUT terraform:       +
availability_zone = "nova" 2025-06-05 18:57:35.333814 | orchestrator | 18:57:35.333 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.333851 | orchestrator | 18:57:35.333 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.333889 | orchestrator | 18:57:35.333 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-05 18:57:35.333925 | orchestrator | 18:57:35.333 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.333937 | orchestrator | 18:57:35.333 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.333967 | orchestrator | 18:57:35.333 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.333991 | orchestrator | 18:57:35.333 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.334003 | orchestrator | 18:57:35.333 STDOUT terraform:  } 2025-06-05 18:57:35.334112 | orchestrator | 18:57:35.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-05 18:57:35.334161 | orchestrator | 18:57:35.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.334193 | orchestrator | 18:57:35.334 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.334219 | orchestrator | 18:57:35.334 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.334257 | orchestrator | 18:57:35.334 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.334339 | orchestrator | 18:57:35.334 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.334373 | orchestrator | 18:57:35.334 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-05 18:57:35.334412 | orchestrator | 18:57:35.334 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.334435 | orchestrator | 18:57:35.334 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.334460 | orchestrator | 18:57:35.334 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.334487 | orchestrator | 
18:57:35.334 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.334498 | orchestrator | 18:57:35.334 STDOUT terraform:  } 2025-06-05 18:57:35.334543 | orchestrator | 18:57:35.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-05 18:57:35.334586 | orchestrator | 18:57:35.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.334626 | orchestrator | 18:57:35.334 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.334652 | orchestrator | 18:57:35.334 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.334689 | orchestrator | 18:57:35.334 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.334725 | orchestrator | 18:57:35.334 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.334767 | orchestrator | 18:57:35.334 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-05 18:57:35.334809 | orchestrator | 18:57:35.334 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.334832 | orchestrator | 18:57:35.334 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.334857 | orchestrator | 18:57:35.334 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.334882 | orchestrator | 18:57:35.334 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.334894 | orchestrator | 18:57:35.334 STDOUT terraform:  } 2025-06-05 18:57:35.334938 | orchestrator | 18:57:35.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-05 18:57:35.334981 | orchestrator | 18:57:35.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.335018 | orchestrator | 18:57:35.334 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.335042 | orchestrator | 18:57:35.335 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.335079 | orchestrator | 18:57:35.335 STDOUT 
terraform:  + id = (known after apply) 2025-06-05 18:57:35.335114 | orchestrator | 18:57:35.335 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.335155 | orchestrator | 18:57:35.335 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-05 18:57:35.335190 | orchestrator | 18:57:35.335 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.335203 | orchestrator | 18:57:35.335 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.335243 | orchestrator | 18:57:35.335 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.335254 | orchestrator | 18:57:35.335 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.335265 | orchestrator | 18:57:35.335 STDOUT terraform:  } 2025-06-05 18:57:35.335318 | orchestrator | 18:57:35.335 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-05 18:57:35.335361 | orchestrator | 18:57:35.335 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.335396 | orchestrator | 18:57:35.335 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.335421 | orchestrator | 18:57:35.335 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.335457 | orchestrator | 18:57:35.335 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.335493 | orchestrator | 18:57:35.335 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.335532 | orchestrator | 18:57:35.335 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-05 18:57:35.335568 | orchestrator | 18:57:35.335 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.335580 | orchestrator | 18:57:35.335 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.335610 | orchestrator | 18:57:35.335 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.335636 | orchestrator | 18:57:35.335 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.335653 | 
orchestrator | 18:57:35.335 STDOUT terraform:  } 2025-06-05 18:57:35.335691 | orchestrator | 18:57:35.335 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-05 18:57:35.335736 | orchestrator | 18:57:35.335 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.335771 | orchestrator | 18:57:35.335 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.335797 | orchestrator | 18:57:35.335 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.335833 | orchestrator | 18:57:35.335 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.335869 | orchestrator | 18:57:35.335 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.335908 | orchestrator | 18:57:35.335 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-05 18:57:35.335944 | orchestrator | 18:57:35.335 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.335965 | orchestrator | 18:57:35.335 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.335990 | orchestrator | 18:57:35.335 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.336014 | orchestrator | 18:57:35.335 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.336026 | orchestrator | 18:57:35.336 STDOUT terraform:  } 2025-06-05 18:57:35.336070 | orchestrator | 18:57:35.336 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-05 18:57:35.336112 | orchestrator | 18:57:35.336 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.336149 | orchestrator | 18:57:35.336 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.336173 | orchestrator | 18:57:35.336 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.336228 | orchestrator | 18:57:35.336 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.336312 | orchestrator | 
18:57:35.336 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.336372 | orchestrator | 18:57:35.336 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-05 18:57:35.336428 | orchestrator | 18:57:35.336 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.336453 | orchestrator | 18:57:35.336 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.336484 | orchestrator | 18:57:35.336 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.336507 | orchestrator | 18:57:35.336 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.336518 | orchestrator | 18:57:35.336 STDOUT terraform:  } 2025-06-05 18:57:35.336563 | orchestrator | 18:57:35.336 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-05 18:57:35.336605 | orchestrator | 18:57:35.336 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.336642 | orchestrator | 18:57:35.336 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.336666 | orchestrator | 18:57:35.336 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.336704 | orchestrator | 18:57:35.336 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.336740 | orchestrator | 18:57:35.336 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.336825 | orchestrator | 18:57:35.336 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-05 18:57:35.336864 | orchestrator | 18:57:35.336 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.336887 | orchestrator | 18:57:35.336 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.336913 | orchestrator | 18:57:35.336 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.336939 | orchestrator | 18:57:35.336 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.336950 | orchestrator | 18:57:35.336 STDOUT terraform:  } 2025-06-05 18:57:35.336994 | orchestrator | 
18:57:35.336 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-05 18:57:35.337038 | orchestrator | 18:57:35.336 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-05 18:57:35.337075 | orchestrator | 18:57:35.337 STDOUT terraform:  + attachment = (known after apply) 2025-06-05 18:57:35.337102 | orchestrator | 18:57:35.337 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.337139 | orchestrator | 18:57:35.337 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.337192 | orchestrator | 18:57:35.337 STDOUT terraform:  + metadata = (known after apply) 2025-06-05 18:57:35.337232 | orchestrator | 18:57:35.337 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-05 18:57:35.337268 | orchestrator | 18:57:35.337 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.337300 | orchestrator | 18:57:35.337 STDOUT terraform:  + size = 20 2025-06-05 18:57:35.337312 | orchestrator | 18:57:35.337 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-05 18:57:35.337342 | orchestrator | 18:57:35.337 STDOUT terraform:  + volume_type = "ssd" 2025-06-05 18:57:35.337354 | orchestrator | 18:57:35.337 STDOUT terraform:  } 2025-06-05 18:57:35.337401 | orchestrator | 18:57:35.337 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-05 18:57:35.337440 | orchestrator | 18:57:35.337 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-05 18:57:35.337475 | orchestrator | 18:57:35.337 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-05 18:57:35.337513 | orchestrator | 18:57:35.337 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-05 18:57:35.337556 | orchestrator | 18:57:35.337 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-05 18:57:35.337594 | orchestrator | 18:57:35.337 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 
18:57:35.337607 | orchestrator | 18:57:35.337 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.337617 | orchestrator | 18:57:35.337 STDOUT terraform:  + config_drive = true 2025-06-05 18:57:35.337660 | orchestrator | 18:57:35.337 STDOUT terraform:  + created = (known after apply) 2025-06-05 18:57:35.337692 | orchestrator | 18:57:35.337 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-05 18:57:35.337711 | orchestrator | 18:57:35.337 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-05 18:57:35.337745 | orchestrator | 18:57:35.337 STDOUT terraform:  + force_delete = false 2025-06-05 18:57:35.337777 | orchestrator | 18:57:35.337 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-05 18:57:35.337806 | orchestrator | 18:57:35.337 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.337853 | orchestrator | 18:57:35.337 STDOUT terraform:  + image_id = (known after apply) 2025-06-05 18:57:35.337868 | orchestrator | 18:57:35.337 STDOUT terraform:  + image_name = (known after apply) 2025-06-05 18:57:35.337904 | orchestrator | 18:57:35.337 STDOUT terraform:  + key_pair = "testbed" 2025-06-05 18:57:35.337936 | orchestrator | 18:57:35.337 STDOUT terraform:  + name = "testbed-manager" 2025-06-05 18:57:35.337948 | orchestrator | 18:57:35.337 STDOUT terraform:  + power_state = "active" 2025-06-05 18:57:35.338009 | orchestrator | 18:57:35.337 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.338072 | orchestrator | 18:57:35.337 STDOUT terraform:  + security_groups = (known after apply) 2025-06-05 18:57:35.338084 | orchestrator | 18:57:35.338 STDOUT terraform:  + stop_before_destroy = false 2025-06-05 18:57:35.338126 | orchestrator | 18:57:35.338 STDOUT terraform:  + updated = (known after apply) 2025-06-05 18:57:35.338162 | orchestrator | 18:57:35.338 STDOUT terraform:  + user_data = (known after apply) 2025-06-05 18:57:35.338175 | orchestrator | 18:57:35.338 STDOUT terraform:  + block_device 
{ 2025-06-05 18:57:35.338209 | orchestrator | 18:57:35.338 STDOUT terraform:  + boot_index = 0 2025-06-05 18:57:35.338222 | orchestrator | 18:57:35.338 STDOUT terraform:  + delete_on_termination = false 2025-06-05 18:57:35.338262 | orchestrator | 18:57:35.338 STDOUT terraform:  + destination_type = "volume" 2025-06-05 18:57:35.338346 | orchestrator | 18:57:35.338 STDOUT terraform:  + multiattach = false 2025-06-05 18:57:35.338362 | orchestrator | 18:57:35.338 STDOUT terraform:  + source_type = "volume" 2025-06-05 18:57:35.338374 | orchestrator | 18:57:35.338 STDOUT terraform:  + uuid = (known after apply) 2025-06-05 18:57:35.338383 | orchestrator | 18:57:35.338 STDOUT terraform:  } 2025-06-05 18:57:35.338391 | orchestrator | 18:57:35.338 STDOUT terraform:  + network { 2025-06-05 18:57:35.338418 | orchestrator | 18:57:35.338 STDOUT terraform:  + access_network = false 2025-06-05 18:57:35.338427 | orchestrator | 18:57:35.338 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-05 18:57:35.338458 | orchestrator | 18:57:35.338 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-05 18:57:35.338490 | orchestrator | 18:57:35.338 STDOUT terraform:  + mac = (known after apply) 2025-06-05 18:57:35.338522 | orchestrator | 18:57:35.338 STDOUT terraform:  + name = (known after apply) 2025-06-05 18:57:35.338555 | orchestrator | 18:57:35.338 STDOUT terraform:  + port = (known after apply) 2025-06-05 18:57:35.338587 | orchestrator | 18:57:35.338 STDOUT terraform:  + uuid = (known after apply) 2025-06-05 18:57:35.338622 | orchestrator | 18:57:35.338 STDOUT terraform:  } 2025-06-05 18:57:35.338632 | orchestrator | 18:57:35.338 STDOUT terraform:  } 2025-06-05 18:57:35.338642 | orchestrator | 18:57:35.338 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-05 18:57:35.338689 | orchestrator | 18:57:35.338 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-05 18:57:35.338723 | orchestrator | 
18:57:35.338 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-05 18:57:35.338758 | orchestrator | 18:57:35.338 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-05 18:57:35.338792 | orchestrator | 18:57:35.338 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-05 18:57:35.338827 | orchestrator | 18:57:35.338 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.338855 | orchestrator | 18:57:35.338 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.338865 | orchestrator | 18:57:35.338 STDOUT terraform:  + config_drive = true 2025-06-05 18:57:35.338907 | orchestrator | 18:57:35.338 STDOUT terraform:  + created = (known after apply) 2025-06-05 18:57:35.338942 | orchestrator | 18:57:35.338 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-05 18:57:35.338971 | orchestrator | 18:57:35.338 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-05 18:57:35.339007 | orchestrator | 18:57:35.338 STDOUT terraform:  + force_delete = false 2025-06-05 18:57:35.339035 | orchestrator | 18:57:35.338 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-05 18:57:35.339068 | orchestrator | 18:57:35.339 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.339103 | orchestrator | 18:57:35.339 STDOUT terraform:  + image_id = (known after apply) 2025-06-05 18:57:35.339144 | orchestrator | 18:57:35.339 STDOUT terraform:  + image_name = (known after apply) 2025-06-05 18:57:35.339160 | orchestrator | 18:57:35.339 STDOUT terraform:  + key_pair = "testbed" 2025-06-05 18:57:35.339191 | orchestrator | 18:57:35.339 STDOUT terraform:  + name = "testbed-node-0" 2025-06-05 18:57:35.339205 | orchestrator | 18:57:35.339 STDOUT terraform:  + power_state = "active" 2025-06-05 18:57:35.339263 | orchestrator | 18:57:35.339 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.339407 | orchestrator | 18:57:35.339 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-05 18:57:35.339425 | orchestrator | 18:57:35.339 STDOUT terraform:  + stop_before_destroy = false 2025-06-05 18:57:35.339437 | orchestrator | 18:57:35.339 STDOUT terraform:  + updated = (known after apply) 2025-06-05 18:57:35.339449 | orchestrator | 18:57:35.339 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-05 18:57:35.339462 | orchestrator | 18:57:35.339 STDOUT terraform:  + block_device { 2025-06-05 18:57:35.339478 | orchestrator | 18:57:35.339 STDOUT terraform:  + boot_index = 0 2025-06-05 18:57:35.339490 | orchestrator | 18:57:35.339 STDOUT terraform:  + delete_on_termination = false 2025-06-05 18:57:35.339502 | orchestrator | 18:57:35.339 STDOUT terraform:  + destination_type = "volume" 2025-06-05 18:57:35.339530 | orchestrator | 18:57:35.339 STDOUT terraform:  + multiattach = false 2025-06-05 18:57:35.339543 | orchestrator | 18:57:35.339 STDOUT terraform:  + source_type = "volume" 2025-06-05 18:57:35.339559 | orchestrator | 18:57:35.339 STDOUT terraform:  + uuid = (known after apply) 2025-06-05 18:57:35.339572 | orchestrator | 18:57:35.339 STDOUT terraform:  } 2025-06-05 18:57:35.339588 | orchestrator | 18:57:35.339 STDOUT terraform:  + network { 2025-06-05 18:57:35.339600 | orchestrator | 18:57:35.339 STDOUT terraform:  + access_network = false 2025-06-05 18:57:35.339616 | orchestrator | 18:57:35.339 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-05 18:57:35.339649 | orchestrator | 18:57:35.339 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-05 18:57:35.339684 | orchestrator | 18:57:35.339 STDOUT terraform:  + mac = (known after apply) 2025-06-05 18:57:35.339707 | orchestrator | 18:57:35.339 STDOUT terraform:  + name = (known after apply) 2025-06-05 18:57:35.339741 | orchestrator | 18:57:35.339 STDOUT terraform:  + port = (known after apply) 2025-06-05 18:57:35.339770 | orchestrator | 18:57:35.339 STDOUT terraform:  + uuid = (known after apply) 2025-06-05 18:57:35.339780 | 
orchestrator | 18:57:35.339 STDOUT terraform:  } 2025-06-05 18:57:35.339790 | orchestrator | 18:57:35.339 STDOUT terraform:  } 2025-06-05 18:57:35.339827 | orchestrator | 18:57:35.339 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-05 18:57:35.339873 | orchestrator | 18:57:35.339 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-05 18:57:35.339908 | orchestrator | 18:57:35.339 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-05 18:57:35.339943 | orchestrator | 18:57:35.339 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-05 18:57:35.339978 | orchestrator | 18:57:35.339 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-05 18:57:35.340017 | orchestrator | 18:57:35.339 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.340043 | orchestrator | 18:57:35.340 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.340054 | orchestrator | 18:57:35.340 STDOUT terraform:  + config_drive = true 2025-06-05 18:57:35.340093 | orchestrator | 18:57:35.340 STDOUT terraform:  + created = (known after apply) 2025-06-05 18:57:35.340130 | orchestrator | 18:57:35.340 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-05 18:57:35.340174 | orchestrator | 18:57:35.340 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-05 18:57:35.340182 | orchestrator | 18:57:35.340 STDOUT terraform:  + force_delete = false 2025-06-05 18:57:35.340216 | orchestrator | 18:57:35.340 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-05 18:57:35.340249 | orchestrator | 18:57:35.340 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.340300 | orchestrator | 18:57:35.340 STDOUT terraform:  + image_id = (known after apply) 2025-06-05 18:57:35.340329 | orchestrator | 18:57:35.340 STDOUT terraform:  + image_name = (known after apply) 2025-06-05 18:57:35.340360 | orchestrator | 18:57:35.340 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-05 18:57:35.340370 | orchestrator | 18:57:35.340 STDOUT terraform:  + name = "testbed-node-1" 2025-06-05 18:57:35.340399 | orchestrator | 18:57:35.340 STDOUT terraform:  + power_state = "active" 2025-06-05 18:57:35.340435 | orchestrator | 18:57:35.340 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.340470 | orchestrator | 18:57:35.340 STDOUT terraform:  + security_groups = (known after apply) 2025-06-05 18:57:35.340496 | orchestrator | 18:57:35.340 STDOUT terraform:  + stop_before_destroy = false 2025-06-05 18:57:35.340529 | orchestrator | 18:57:35.340 STDOUT terraform:  + updated = (known after apply) 2025-06-05 18:57:35.340579 | orchestrator | 18:57:35.340 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-05 18:57:35.340590 | orchestrator | 18:57:35.340 STDOUT terraform:  + block_device { 2025-06-05 18:57:35.340617 | orchestrator | 18:57:35.340 STDOUT terraform:  + boot_index = 0 2025-06-05 18:57:35.340645 | orchestrator | 18:57:35.340 STDOUT terraform:  + delete_on_termination = false 2025-06-05 18:57:35.340675 | orchestrator | 18:57:35.340 STDOUT terraform:  + destination_type = "volume" 2025-06-05 18:57:35.340714 | orchestrator | 18:57:35.340 STDOUT terraform:  + multiattach = false 2025-06-05 18:57:35.340731 | orchestrator | 18:57:35.340 STDOUT terraform:  + source_type = "volume" 2025-06-05 18:57:35.340770 | orchestrator | 18:57:35.340 STDOUT terraform:  + uuid = (known after apply) 2025-06-05 18:57:35.340782 | orchestrator | 18:57:35.340 STDOUT terraform:  } 2025-06-05 18:57:35.340791 | orchestrator | 18:57:35.340 STDOUT terraform:  + network { 2025-06-05 18:57:35.340816 | orchestrator | 18:57:35.340 STDOUT terraform:  + access_network = false 2025-06-05 18:57:35.340845 | orchestrator | 18:57:35.340 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-05 18:57:35.340876 | orchestrator | 18:57:35.340 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-05 
18:57:35.340915 | orchestrator | 18:57:35.340 STDOUT terraform:  + mac = (known after apply) 2025-06-05 18:57:35.340942 | orchestrator | 18:57:35.340 STDOUT terraform:  + name = (known after apply) 2025-06-05 18:57:35.340972 | orchestrator | 18:57:35.340 STDOUT terraform:  + port = (known after apply) 2025-06-05 18:57:35.341003 | orchestrator | 18:57:35.340 STDOUT terraform:  + uuid = (known after apply) 2025-06-05 18:57:35.341014 | orchestrator | 18:57:35.340 STDOUT terraform:  } 2025-06-05 18:57:35.341023 | orchestrator | 18:57:35.341 STDOUT terraform:  } 2025-06-05 18:57:35.341066 | orchestrator | 18:57:35.341 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-05 18:57:35.341107 | orchestrator | 18:57:35.341 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-05 18:57:35.341144 | orchestrator | 18:57:35.341 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-05 18:57:35.341178 | orchestrator | 18:57:35.341 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-05 18:57:35.341215 | orchestrator | 18:57:35.341 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-05 18:57:35.341263 | orchestrator | 18:57:35.341 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.341319 | orchestrator | 18:57:35.341 STDOUT terraform:  + availability_zone = "nova" 2025-06-05 18:57:35.341331 | orchestrator | 18:57:35.341 STDOUT terraform:  + config_drive = true 2025-06-05 18:57:35.341340 | orchestrator | 18:57:35.341 STDOUT terraform:  + created = (known after apply) 2025-06-05 18:57:35.341382 | orchestrator | 18:57:35.341 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-05 18:57:35.341410 | orchestrator | 18:57:35.341 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-05 18:57:35.341427 | orchestrator | 18:57:35.341 STDOUT terraform:  + force_delete = false 2025-06-05 18:57:35.341465 | orchestrator | 18:57:35.341 STDOUT terraform:  + 
2025-06-05 18:57:35.341512 | orchestrator | 18:57:35.341 STDOUT terraform:
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-05 18:57:35.353339 | orchestrator | 18:57:35.353 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-05 18:57:35.353347 | orchestrator | 18:57:35.353 STDOUT terraform:  } 2025-06-05 18:57:35.353365 | orchestrator | 18:57:35.353 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.353394 | orchestrator | 18:57:35.353 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-05 18:57:35.353402 | orchestrator | 18:57:35.353 STDOUT terraform:  } 2025-06-05 18:57:35.353420 | orchestrator | 18:57:35.353 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.353450 | orchestrator | 18:57:35.353 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-05 18:57:35.353457 | orchestrator | 18:57:35.353 STDOUT terraform:  } 2025-06-05 18:57:35.353483 | orchestrator | 18:57:35.353 STDOUT terraform:  + binding (known after apply) 2025-06-05 18:57:35.353491 | orchestrator | 18:57:35.353 STDOUT terraform:  + fixed_ip { 2025-06-05 18:57:35.353519 | orchestrator | 18:57:35.353 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-05 18:57:35.353548 | orchestrator | 18:57:35.353 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-05 18:57:35.353555 | orchestrator | 18:57:35.353 STDOUT terraform:  } 2025-06-05 18:57:35.353569 | orchestrator | 18:57:35.353 STDOUT terraform:  } 2025-06-05 18:57:35.353614 | orchestrator | 18:57:35.353 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-05 18:57:35.353658 | orchestrator | 18:57:35.353 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-05 18:57:35.353694 | orchestrator | 18:57:35.353 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-05 18:57:35.353732 | orchestrator | 18:57:35.353 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-05 18:57:35.353767 | orchestrator | 18:57:35.353 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-05 18:57:35.353804 | orchestrator | 18:57:35.353 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.353840 | orchestrator | 18:57:35.353 STDOUT terraform:  + device_id = (known after apply) 2025-06-05 18:57:35.353877 | orchestrator | 18:57:35.353 STDOUT terraform:  + device_owner = (known after apply) 2025-06-05 18:57:35.353912 | orchestrator | 18:57:35.353 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-05 18:57:35.353948 | orchestrator | 18:57:35.353 STDOUT terraform:  + dns_name = (known after apply) 2025-06-05 18:57:35.353988 | orchestrator | 18:57:35.353 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.354021 | orchestrator | 18:57:35.353 STDOUT terraform:  + mac_address = (known after apply) 2025-06-05 18:57:35.354067 | orchestrator | 18:57:35.354 STDOUT terraform:  + network_id = (known after apply) 2025-06-05 18:57:35.354104 | orchestrator | 18:57:35.354 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-05 18:57:35.354140 | orchestrator | 18:57:35.354 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-05 18:57:35.354177 | orchestrator | 18:57:35.354 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.354212 | orchestrator | 18:57:35.354 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-05 18:57:35.354248 | orchestrator | 18:57:35.354 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.354268 | orchestrator | 18:57:35.354 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.354308 | orchestrator | 18:57:35.354 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-05 18:57:35.354316 | orchestrator | 18:57:35.354 STDOUT terraform:  } 2025-06-05 18:57:35.354483 | orchestrator | 18:57:35.354 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.354576 | orchestrator | 18:57:35.354 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-05 18:57:35.354592 | 
orchestrator | 18:57:35.354 STDOUT terraform:  } 2025-06-05 18:57:35.354604 | orchestrator | 18:57:35.354 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.354615 | orchestrator | 18:57:35.354 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-05 18:57:35.354638 | orchestrator | 18:57:35.354 STDOUT terraform:  } 2025-06-05 18:57:35.354650 | orchestrator | 18:57:35.354 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.354661 | orchestrator | 18:57:35.354 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-05 18:57:35.354844 | orchestrator | 18:57:35.354 STDOUT terraform:  } 2025-06-05 18:57:35.354857 | orchestrator | 18:57:35.354 STDOUT terraform:  + binding (known after apply) 2025-06-05 18:57:35.354868 | orchestrator | 18:57:35.354 STDOUT terraform:  + fixed_ip { 2025-06-05 18:57:35.354879 | orchestrator | 18:57:35.354 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-05 18:57:35.354890 | orchestrator | 18:57:35.354 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-05 18:57:35.354901 | orchestrator | 18:57:35.354 STDOUT terraform:  } 2025-06-05 18:57:35.354912 | orchestrator | 18:57:35.354 STDOUT terraform:  } 2025-06-05 18:57:35.354929 | orchestrator | 18:57:35.354 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-05 18:57:35.354941 | orchestrator | 18:57:35.354 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-05 18:57:35.354953 | orchestrator | 18:57:35.354 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-05 18:57:35.354964 | orchestrator | 18:57:35.354 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-05 18:57:35.354975 | orchestrator | 18:57:35.354 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-05 18:57:35.354987 | orchestrator | 18:57:35.354 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.354998 | orchestrator | 
18:57:35.354 STDOUT terraform:  + device_id = (known after apply) 2025-06-05 18:57:35.355012 | orchestrator | 18:57:35.354 STDOUT terraform:  + device_owner = (known after apply) 2025-06-05 18:57:35.355024 | orchestrator | 18:57:35.354 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-05 18:57:35.355035 | orchestrator | 18:57:35.354 STDOUT terraform:  + dns_name = (known after apply) 2025-06-05 18:57:35.355050 | orchestrator | 18:57:35.354 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.355064 | orchestrator | 18:57:35.355 STDOUT terraform:  + mac_address = (known after apply) 2025-06-05 18:57:35.355103 | orchestrator | 18:57:35.355 STDOUT terraform:  + network_id = (known after apply) 2025-06-05 18:57:35.355140 | orchestrator | 18:57:35.355 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-05 18:57:35.355189 | orchestrator | 18:57:35.355 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-05 18:57:35.355207 | orchestrator | 18:57:35.355 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.355243 | orchestrator | 18:57:35.355 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-05 18:57:35.355313 | orchestrator | 18:57:35.355 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.355332 | orchestrator | 18:57:35.355 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.355369 | orchestrator | 18:57:35.355 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-05 18:57:35.355382 | orchestrator | 18:57:35.355 STDOUT terraform:  } 2025-06-05 18:57:35.355396 | orchestrator | 18:57:35.355 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.355411 | orchestrator | 18:57:35.355 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-05 18:57:35.355436 | orchestrator | 18:57:35.355 STDOUT terraform:  } 2025-06-05 18:57:35.355447 | orchestrator | 18:57:35.355 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 
18:57:35.355461 | orchestrator | 18:57:35.355 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-05 18:57:35.355475 | orchestrator | 18:57:35.355 STDOUT terraform:  } 2025-06-05 18:57:35.355489 | orchestrator | 18:57:35.355 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.355516 | orchestrator | 18:57:35.355 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-05 18:57:35.355531 | orchestrator | 18:57:35.355 STDOUT terraform:  } 2025-06-05 18:57:35.355545 | orchestrator | 18:57:35.355 STDOUT terraform:  + binding (known after apply) 2025-06-05 18:57:35.355559 | orchestrator | 18:57:35.355 STDOUT terraform:  + fixed_ip { 2025-06-05 18:57:35.355575 | orchestrator | 18:57:35.355 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-05 18:57:35.355617 | orchestrator | 18:57:35.355 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-05 18:57:35.355630 | orchestrator | 18:57:35.355 STDOUT terraform:  } 2025-06-05 18:57:35.355645 | orchestrator | 18:57:35.355 STDOUT terraform:  } 2025-06-05 18:57:35.355684 | orchestrator | 18:57:35.355 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-05 18:57:35.355731 | orchestrator | 18:57:35.355 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-05 18:57:35.355759 | orchestrator | 18:57:35.355 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-05 18:57:35.355800 | orchestrator | 18:57:35.355 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-05 18:57:35.355838 | orchestrator | 18:57:35.355 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-05 18:57:35.355879 | orchestrator | 18:57:35.355 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.355895 | orchestrator | 18:57:35.355 STDOUT terraform:  + device_id = (known after apply) 2025-06-05 18:57:35.355939 | orchestrator | 18:57:35.355 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-05 18:57:35.355966 | orchestrator | 18:57:35.355 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-05 18:57:35.356009 | orchestrator | 18:57:35.355 STDOUT terraform:  + dns_name = (known after apply) 2025-06-05 18:57:35.356045 | orchestrator | 18:57:35.355 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.356082 | orchestrator | 18:57:35.356 STDOUT terraform:  + mac_address = (known after apply) 2025-06-05 18:57:35.356117 | orchestrator | 18:57:35.356 STDOUT terraform:  + network_id = (known after apply) 2025-06-05 18:57:35.356153 | orchestrator | 18:57:35.356 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-05 18:57:35.356189 | orchestrator | 18:57:35.356 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-05 18:57:35.356227 | orchestrator | 18:57:35.356 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.356270 | orchestrator | 18:57:35.356 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-05 18:57:35.356343 | orchestrator | 18:57:35.356 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.356355 | orchestrator | 18:57:35.356 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.356366 | orchestrator | 18:57:35.356 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-05 18:57:35.356381 | orchestrator | 18:57:35.356 STDOUT terraform:  } 2025-06-05 18:57:35.356392 | orchestrator | 18:57:35.356 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.356406 | orchestrator | 18:57:35.356 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-05 18:57:35.356418 | orchestrator | 18:57:35.356 STDOUT terraform:  } 2025-06-05 18:57:35.356432 | orchestrator | 18:57:35.356 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.356447 | orchestrator | 18:57:35.356 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-05 18:57:35.356461 | orchestrator | 18:57:35.356 STDOUT terraform:  } 
2025-06-05 18:57:35.356475 | orchestrator | 18:57:35.356 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.356489 | orchestrator | 18:57:35.356 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-05 18:57:35.356504 | orchestrator | 18:57:35.356 STDOUT terraform:  } 2025-06-05 18:57:35.356532 | orchestrator | 18:57:35.356 STDOUT terraform:  + binding (known after apply) 2025-06-05 18:57:35.356548 | orchestrator | 18:57:35.356 STDOUT terraform:  + fixed_ip { 2025-06-05 18:57:35.356562 | orchestrator | 18:57:35.356 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-05 18:57:35.356591 | orchestrator | 18:57:35.356 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-05 18:57:35.356606 | orchestrator | 18:57:35.356 STDOUT terraform:  } 2025-06-05 18:57:35.356620 | orchestrator | 18:57:35.356 STDOUT terraform:  } 2025-06-05 18:57:35.356726 | orchestrator | 18:57:35.356 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-05 18:57:35.356755 | orchestrator | 18:57:35.356 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-05 18:57:35.356762 | orchestrator | 18:57:35.356 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-05 18:57:35.356783 | orchestrator | 18:57:35.356 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-05 18:57:35.356819 | orchestrator | 18:57:35.356 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-05 18:57:35.356856 | orchestrator | 18:57:35.356 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.356892 | orchestrator | 18:57:35.356 STDOUT terraform:  + device_id = (known after apply) 2025-06-05 18:57:35.356928 | orchestrator | 18:57:35.356 STDOUT terraform:  + device_owner = (known after apply) 2025-06-05 18:57:35.356964 | orchestrator | 18:57:35.356 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-05 18:57:35.357001 | orchestrator | 
18:57:35.356 STDOUT terraform:  + dns_name = (known after apply) 2025-06-05 18:57:35.357039 | orchestrator | 18:57:35.356 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.357073 | orchestrator | 18:57:35.357 STDOUT terraform:  + mac_address = (known after apply) 2025-06-05 18:57:35.358106 | orchestrator | 18:57:35.357 STDOUT terraform:  + network_id = (known after apply) 2025-06-05 18:57:35.358145 | orchestrator | 18:57:35.357 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-05 18:57:35.358151 | orchestrator | 18:57:35.357 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-05 18:57:35.358155 | orchestrator | 18:57:35.357 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.358159 | orchestrator | 18:57:35.357 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-05 18:57:35.358163 | orchestrator | 18:57:35.357 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.358176 | orchestrator | 18:57:35.357 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.358180 | orchestrator | 18:57:35.357 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-05 18:57:35.358185 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358189 | orchestrator | 18:57:35.357 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.358192 | orchestrator | 18:57:35.357 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-05 18:57:35.358196 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358200 | orchestrator | 18:57:35.357 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.358204 | orchestrator | 18:57:35.357 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-05 18:57:35.358208 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358211 | orchestrator | 18:57:35.357 STDOUT terraform:  + allowed_address_pairs { 2025-06-05 18:57:35.358215 | orchestrator | 18:57:35.357 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-05 18:57:35.358219 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358222 | orchestrator | 18:57:35.357 STDOUT terraform:  + binding (known after apply) 2025-06-05 18:57:35.358226 | orchestrator | 18:57:35.357 STDOUT terraform:  + fixed_ip { 2025-06-05 18:57:35.358230 | orchestrator | 18:57:35.357 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-05 18:57:35.358234 | orchestrator | 18:57:35.357 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-05 18:57:35.358237 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358241 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358245 | orchestrator | 18:57:35.357 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-05 18:57:35.358250 | orchestrator | 18:57:35.357 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-05 18:57:35.358254 | orchestrator | 18:57:35.357 STDOUT terraform:  + force_destroy = false 2025-06-05 18:57:35.358257 | orchestrator | 18:57:35.357 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.358261 | orchestrator | 18:57:35.357 STDOUT terraform:  + port_id = (known after apply) 2025-06-05 18:57:35.358265 | orchestrator | 18:57:35.357 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.358269 | orchestrator | 18:57:35.357 STDOUT terraform:  + router_id = (known after apply) 2025-06-05 18:57:35.358292 | orchestrator | 18:57:35.357 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-05 18:57:35.358297 | orchestrator | 18:57:35.357 STDOUT terraform:  } 2025-06-05 18:57:35.358300 | orchestrator | 18:57:35.357 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-05 18:57:35.358304 | orchestrator | 18:57:35.357 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-05 18:57:35.358308 
| orchestrator | 18:57:35.357 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-05 18:57:35.358312 | orchestrator | 18:57:35.357 STDOUT terraform:  + all_tags = (known after apply) 2025-06-05 18:57:35.358316 | orchestrator | 18:57:35.357 STDOUT terraform:  + availability_zone_hints = [ 2025-06-05 18:57:35.358319 | orchestrator | 18:57:35.357 STDOUT terraform:  + "nova", 2025-06-05 18:57:35.358323 | orchestrator | 18:57:35.357 STDOUT terraform:  ] 2025-06-05 18:57:35.358337 | orchestrator | 18:57:35.357 STDOUT terraform:  + distributed = (known after apply) 2025-06-05 18:57:35.358341 | orchestrator | 18:57:35.358 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-05 18:57:35.358345 | orchestrator | 18:57:35.358 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-05 18:57:35.358349 | orchestrator | 18:57:35.358 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.358352 | orchestrator | 18:57:35.358 STDOUT terraform:  + name = "testbed" 2025-06-05 18:57:35.358356 | orchestrator | 18:57:35.358 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.358360 | orchestrator | 18:57:35.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.358364 | orchestrator | 18:57:35.358 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-05 18:57:35.358368 | orchestrator | 18:57:35.358 STDOUT terraform:  } 2025-06-05 18:57:35.358372 | orchestrator | 18:57:35.358 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-05 18:57:35.358402 | orchestrator | 18:57:35.358 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-05 18:57:35.358408 | orchestrator | 18:57:35.358 STDOUT terraform:  + description = "ssh" 2025-06-05 18:57:35.358448 | orchestrator | 18:57:35.358 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.358454 | 
orchestrator | 18:57:35.358 STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.358496 | orchestrator | 18:57:35.358 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.358503 | orchestrator | 18:57:35.358 STDOUT terraform:  + port_range_max = 22 2025-06-05 18:57:35.358531 | orchestrator | 18:57:35.358 STDOUT terraform:  + port_range_min = 22 2025-06-05 18:57:35.358557 | orchestrator | 18:57:35.358 STDOUT terraform:  + protocol = "tcp" 2025-06-05 18:57:35.358584 | orchestrator | 18:57:35.358 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.358614 | orchestrator | 18:57:35.358 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.358634 | orchestrator | 18:57:35.358 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-05 18:57:35.358665 | orchestrator | 18:57:35.358 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-05 18:57:35.358696 | orchestrator | 18:57:35.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.358703 | orchestrator | 18:57:35.358 STDOUT terraform:  } 2025-06-05 18:57:35.358764 | orchestrator | 18:57:35.358 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-05 18:57:35.358813 | orchestrator | 18:57:35.358 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-05 18:57:35.358839 | orchestrator | 18:57:35.358 STDOUT terraform:  + description = "wireguard" 2025-06-05 18:57:35.358859 | orchestrator | 18:57:35.358 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.358879 | orchestrator | 18:57:35.358 STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.358909 | orchestrator | 18:57:35.358 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.358915 | orchestrator | 18:57:35.358 STDOUT terraform:  + port_range_max = 51820 2025-06-05 18:57:35.358946 | orchestrator | 18:57:35.358 STDOUT 
terraform:  + port_range_min = 51820 2025-06-05 18:57:35.358970 | orchestrator | 18:57:35.358 STDOUT terraform:  + protocol = "udp" 2025-06-05 18:57:35.358997 | orchestrator | 18:57:35.358 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.359028 | orchestrator | 18:57:35.358 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.359054 | orchestrator | 18:57:35.359 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-05 18:57:35.359084 | orchestrator | 18:57:35.359 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-05 18:57:35.359114 | orchestrator | 18:57:35.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.359125 | orchestrator | 18:57:35.359 STDOUT terraform:  } 2025-06-05 18:57:35.359177 | orchestrator | 18:57:35.359 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-05 18:57:35.359230 | orchestrator | 18:57:35.359 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-05 18:57:35.359250 | orchestrator | 18:57:35.359 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.359294 | orchestrator | 18:57:35.359 STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.359332 | orchestrator | 18:57:35.359 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.359357 | orchestrator | 18:57:35.359 STDOUT terraform:  + protocol = "tcp" 2025-06-05 18:57:35.359384 | orchestrator | 18:57:35.359 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.359416 | orchestrator | 18:57:35.359 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.359447 | orchestrator | 18:57:35.359 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-05 18:57:35.359477 | orchestrator | 18:57:35.359 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-05 18:57:35.359507 | orchestrator | 
18:57:35.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.359517 | orchestrator | 18:57:35.359 STDOUT terraform:  } 2025-06-05 18:57:35.359571 | orchestrator | 18:57:35.359 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-05 18:57:35.359623 | orchestrator | 18:57:35.359 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-05 18:57:35.359643 | orchestrator | 18:57:35.359 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.359662 | orchestrator | 18:57:35.359 STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.359694 | orchestrator | 18:57:35.359 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.359713 | orchestrator | 18:57:35.359 STDOUT terraform:  + protocol = "udp" 2025-06-05 18:57:35.359742 | orchestrator | 18:57:35.359 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.359773 | orchestrator | 18:57:35.359 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.362765 | orchestrator | 18:57:35.359 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-05 18:57:35.362789 | orchestrator | 18:57:35.359 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-05 18:57:35.362794 | orchestrator | 18:57:35.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.362798 | orchestrator | 18:57:35.359 STDOUT terraform:  } 2025-06-05 18:57:35.362802 | orchestrator | 18:57:35.359 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-05 18:57:35.362806 | orchestrator | 18:57:35.359 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-05 18:57:35.362810 | orchestrator | 18:57:35.359 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.362814 | orchestrator | 18:57:35.359 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.362818 | orchestrator | 18:57:35.359 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.362821 | orchestrator | 18:57:35.360 STDOUT terraform:  + protocol = "icmp" 2025-06-05 18:57:35.362825 | orchestrator | 18:57:35.360 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.362829 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.362832 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-05 18:57:35.362836 | orchestrator | 18:57:35.360 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-05 18:57:35.362840 | orchestrator | 18:57:35.360 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.362843 | orchestrator | 18:57:35.360 STDOUT terraform:  } 2025-06-05 18:57:35.362847 | orchestrator | 18:57:35.360 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-05 18:57:35.362851 | orchestrator | 18:57:35.360 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-05 18:57:35.362855 | orchestrator | 18:57:35.360 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.362859 | orchestrator | 18:57:35.360 STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.362871 | orchestrator | 18:57:35.360 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.362880 | orchestrator | 18:57:35.360 STDOUT terraform:  + protocol = "tcp" 2025-06-05 18:57:35.362884 | orchestrator | 18:57:35.360 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.362887 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.362891 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-05 18:57:35.362895 | orchestrator | 18:57:35.360 STDOUT terraform:  + 
security_group_id = (known after apply) 2025-06-05 18:57:35.362898 | orchestrator | 18:57:35.360 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.362902 | orchestrator | 18:57:35.360 STDOUT terraform:  } 2025-06-05 18:57:35.362906 | orchestrator | 18:57:35.360 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-05 18:57:35.362910 | orchestrator | 18:57:35.360 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-05 18:57:35.362914 | orchestrator | 18:57:35.360 STDOUT terraform:  + direction = "ingress" 2025-06-05 18:57:35.362917 | orchestrator | 18:57:35.360 STDOUT terraform:  + ethertype = "IPv4" 2025-06-05 18:57:35.362921 | orchestrator | 18:57:35.360 STDOUT terraform:  + id = (known after apply) 2025-06-05 18:57:35.362925 | orchestrator | 18:57:35.360 STDOUT terraform:  + protocol = "udp" 2025-06-05 18:57:35.362929 | orchestrator | 18:57:35.360 STDOUT terraform:  + region = (known after apply) 2025-06-05 18:57:35.362932 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-05 18:57:35.362942 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-05 18:57:35.362946 | orchestrator | 18:57:35.360 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-05 18:57:35.362950 | orchestrator | 18:57:35.360 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-05 18:57:35.362953 | orchestrator | 18:57:35.360 STDOUT terraform:  } 2025-06-05 18:57:35.362957 | orchestrator | 18:57:35.360 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-05 18:57:35.362961 | orchestrator | 18:57:35.360 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-05 18:57:35.362965 | orchestrator | 18:57:35.360 STDOUT terraform:  + direction = "ingress" 
2025-06-05 18:57:35.362968 | orchestrator | 18:57:35.360 STDOUT terraform:  + ethertype = "IPv4"
2025-06-05 18:57:35.362972 | orchestrator | 18:57:35.360 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.362976 | orchestrator | 18:57:35.360 STDOUT terraform:  + protocol = "icmp"
2025-06-05 18:57:35.362979 | orchestrator | 18:57:35.360 STDOUT terraform:  + region = (known after apply)
2025-06-05 18:57:35.362983 | orchestrator | 18:57:35.360 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-05 18:57:35.362987 | orchestrator | 18:57:35.361 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-05 18:57:35.362990 | orchestrator | 18:57:35.361 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-05 18:57:35.362998 | orchestrator | 18:57:35.361 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-05 18:57:35.363002 | orchestrator | 18:57:35.361 STDOUT terraform:  }
2025-06-05 18:57:35.363005 | orchestrator | 18:57:35.361 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-06-05 18:57:35.363009 | orchestrator | 18:57:35.361 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-06-05 18:57:35.363013 | orchestrator | 18:57:35.361 STDOUT terraform:  + description = "vrrp"
2025-06-05 18:57:35.363017 | orchestrator | 18:57:35.361 STDOUT terraform:  + direction = "ingress"
2025-06-05 18:57:35.363021 | orchestrator | 18:57:35.361 STDOUT terraform:  + ethertype = "IPv4"
2025-06-05 18:57:35.363024 | orchestrator | 18:57:35.361 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.363030 | orchestrator | 18:57:35.361 STDOUT terraform:  + protocol = "112"
2025-06-05 18:57:35.363034 | orchestrator | 18:57:35.361 STDOUT terraform:  + region = (known after apply)
2025-06-05 18:57:35.363038 | orchestrator | 18:57:35.361 STDOUT terraform:  + remote_group_id = (known after apply)
2025-06-05 18:57:35.363042 | orchestrator | 18:57:35.361 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-06-05 18:57:35.363045 | orchestrator | 18:57:35.361 STDOUT terraform:  + security_group_id = (known after apply)
2025-06-05 18:57:35.363049 | orchestrator | 18:57:35.361 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-05 18:57:35.363053 | orchestrator | 18:57:35.361 STDOUT terraform:  }
2025-06-05 18:57:35.363056 | orchestrator | 18:57:35.361 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-06-05 18:57:35.363061 | orchestrator | 18:57:35.361 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-06-05 18:57:35.363064 | orchestrator | 18:57:35.361 STDOUT terraform:  + all_tags = (known after apply)
2025-06-05 18:57:35.363068 | orchestrator | 18:57:35.361 STDOUT terraform:  + description = "management security group"
2025-06-05 18:57:35.363072 | orchestrator | 18:57:35.361 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.363075 | orchestrator | 18:57:35.361 STDOUT terraform:  + name = "testbed-management"
2025-06-05 18:57:35.363079 | orchestrator | 18:57:35.361 STDOUT terraform:  + region = (known after apply)
2025-06-05 18:57:35.363083 | orchestrator | 18:57:35.361 STDOUT terraform:  + stateful = (known after apply)
2025-06-05 18:57:35.363087 | orchestrator | 18:57:35.361 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-05 18:57:35.363094 | orchestrator | 18:57:35.361 STDOUT terraform:  }
2025-06-05 18:57:35.363098 | orchestrator | 18:57:35.361 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-06-05 18:57:35.363101 | orchestrator | 18:57:35.361 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-06-05 18:57:35.363105 | orchestrator | 18:57:35.361 STDOUT terraform:  + all_tags = (known after apply)
2025-06-05 18:57:35.363109 | orchestrator | 18:57:35.361 STDOUT terraform:  + description = "node security group"
2025-06-05 18:57:35.363116 | orchestrator | 18:57:35.361 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.363119 | orchestrator | 18:57:35.361 STDOUT terraform:  + name = "testbed-node"
2025-06-05 18:57:35.363123 | orchestrator | 18:57:35.361 STDOUT terraform:  + region = (known after apply)
2025-06-05 18:57:35.363127 | orchestrator | 18:57:35.361 STDOUT terraform:  + stateful = (known after apply)
2025-06-05 18:57:35.363130 | orchestrator | 18:57:35.361 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-05 18:57:35.363134 | orchestrator | 18:57:35.361 STDOUT terraform:  }
2025-06-05 18:57:35.363138 | orchestrator | 18:57:35.361 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-06-05 18:57:35.363142 | orchestrator | 18:57:35.362 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-06-05 18:57:35.363145 | orchestrator | 18:57:35.362 STDOUT terraform:  + all_tags = (known after apply)
2025-06-05 18:57:35.363149 | orchestrator | 18:57:35.362 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-06-05 18:57:35.363153 | orchestrator | 18:57:35.362 STDOUT terraform:  + dns_nameservers = [
2025-06-05 18:57:35.363156 | orchestrator | 18:57:35.362 STDOUT terraform:  + "8.8.8.8",
2025-06-05 18:57:35.363160 | orchestrator | 18:57:35.362 STDOUT terraform:  + "9.9.9.9",
2025-06-05 18:57:35.363164 | orchestrator | 18:57:35.362 STDOUT terraform:  ]
2025-06-05 18:57:35.363167 | orchestrator | 18:57:35.362 STDOUT terraform:  + enable_dhcp = true
2025-06-05 18:57:35.363171 | orchestrator | 18:57:35.362 STDOUT terraform:  + gateway_ip = (known after apply)
2025-06-05 18:57:35.363175 | orchestrator | 18:57:35.362 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.363179 | orchestrator | 18:57:35.362 STDOUT terraform:  + ip_version = 4
2025-06-05 18:57:35.363182 | orchestrator | 18:57:35.362 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-06-05 18:57:35.363186 | orchestrator | 18:57:35.362 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-06-05 18:57:35.363190 | orchestrator | 18:57:35.362 STDOUT terraform:  + name = "subnet-testbed-management"
2025-06-05 18:57:35.363193 | orchestrator | 18:57:35.362 STDOUT terraform:  + network_id = (known after apply)
2025-06-05 18:57:35.363197 | orchestrator | 18:57:35.362 STDOUT terraform:  + no_gateway = false
2025-06-05 18:57:35.363201 | orchestrator | 18:57:35.362 STDOUT terraform:  + region = (known after apply)
2025-06-05 18:57:35.363204 | orchestrator | 18:57:35.362 STDOUT terraform:  + service_types = (known after apply)
2025-06-05 18:57:35.363208 | orchestrator | 18:57:35.362 STDOUT terraform:  + tenant_id = (known after apply)
2025-06-05 18:57:35.363212 | orchestrator | 18:57:35.362 STDOUT terraform:  + allocation_pool {
2025-06-05 18:57:35.363216 | orchestrator | 18:57:35.362 STDOUT terraform:  + end = "192.168.31.250"
2025-06-05 18:57:35.363219 | orchestrator | 18:57:35.362 STDOUT terraform:  + start = "192.168.31.200"
2025-06-05 18:57:35.363223 | orchestrator | 18:57:35.362 STDOUT terraform:  }
2025-06-05 18:57:35.363227 | orchestrator | 18:57:35.362 STDOUT terraform:  }
2025-06-05 18:57:35.363233 | orchestrator | 18:57:35.362 STDOUT terraform:  # terraform_data.image will be created
2025-06-05 18:57:35.363259 | orchestrator | 18:57:35.362 STDOUT terraform:  + resource "terraform_data" "image" {
2025-06-05 18:57:35.363264 | orchestrator | 18:57:35.362 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.363270 | orchestrator | 18:57:35.362 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-05 18:57:35.363308 | orchestrator | 18:57:35.362 STDOUT terraform:  + output = (known after apply)
2025-06-05 18:57:35.363313 | orchestrator | 18:57:35.362 STDOUT terraform:  }
2025-06-05 18:57:35.363317 | orchestrator | 18:57:35.362 STDOUT terraform:  # terraform_data.image_node will be created
2025-06-05 18:57:35.363320 | orchestrator | 18:57:35.362 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-06-05 18:57:35.363324 | orchestrator | 18:57:35.362 STDOUT terraform:  + id = (known after apply)
2025-06-05 18:57:35.363328 | orchestrator | 18:57:35.362 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-06-05 18:57:35.363332 | orchestrator | 18:57:35.362 STDOUT terraform:  + output = (known after apply)
2025-06-05 18:57:35.363335 | orchestrator | 18:57:35.362 STDOUT terraform:  }
2025-06-05 18:57:35.363339 | orchestrator | 18:57:35.362 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-06-05 18:57:35.363343 | orchestrator | 18:57:35.362 STDOUT terraform: Changes to Outputs:
2025-06-05 18:57:35.363347 | orchestrator | 18:57:35.362 STDOUT terraform:  + manager_address = (sensitive value)
2025-06-05 18:57:35.363350 | orchestrator | 18:57:35.362 STDOUT terraform:  + private_key = (sensitive value)
2025-06-05 18:57:35.560739 | orchestrator | 18:57:35.560 STDOUT terraform: terraform_data.image: Creating...
2025-06-05 18:57:35.560924 | orchestrator | 18:57:35.560 STDOUT terraform: terraform_data.image_node: Creating...
2025-06-05 18:57:35.561985 | orchestrator | 18:57:35.561 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=476287ad-d417-e53e-695a-83c146b1ca5e]
2025-06-05 18:57:35.564207 | orchestrator | 18:57:35.563 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=c5907988-a870-5d00-e399-b92c1384779f]
2025-06-05 18:57:35.575171 | orchestrator | 18:57:35.575 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-06-05 18:57:35.575376 | orchestrator | 18:57:35.575 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-06-05 18:57:35.575561 | orchestrator | 18:57:35.575 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
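The `terraform_data` resources in the plan above pass the image name ("Ubuntu 24.04") through `input` and expose it again as `output` once applied, which is why the `data.openstack_images_image_v2` reads only start after the `terraform_data` resources complete. A minimal sketch of that pattern, assuming a hypothetical `image` variable and wiring (the testbed's actual variable names are not shown in the log):

```hcl
# Hedged sketch of the terraform_data pass-through pattern seen in the
# plan output (requires Terraform >= 1.4). Variable and reference names
# are assumptions for illustration, not the testbed's actual code.
variable "image" {
  type    = string
  default = "Ubuntu 24.04"
}

resource "terraform_data" "image" {
  input = var.image
}

data "openstack_images_image_v2" "image" {
  # Reading via terraform_data.image.output defers the lookup until the
  # terraform_data resource exists, matching the ordering in the log.
  name = terraform_data.image.output
}
```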
2025-06-05 18:57:35.576610 | orchestrator | 18:57:35.576 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-06-05 18:57:35.581033 | orchestrator | 18:57:35.580 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-06-05 18:57:35.582927 | orchestrator | 18:57:35.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-06-05 18:57:35.583191 | orchestrator | 18:57:35.583 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-06-05 18:57:35.585958 | orchestrator | 18:57:35.585 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-06-05 18:57:35.587107 | orchestrator | 18:57:35.586 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-06-05 18:57:35.592246 | orchestrator | 18:57:35.592 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-06-05 18:57:41.607862 | orchestrator | 18:57:41.607 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=825009d0-00d8-4825-b11d-1fae4e25c02d]
2025-06-05 18:57:41.613403 | orchestrator | 18:57:41.613 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-06-05 18:57:41.733088 | orchestrator | 18:57:41.732 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-06-05 18:57:41.738158 | orchestrator | 18:57:41.737 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-06-05 18:57:41.807982 | orchestrator | 18:57:41.807 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-05 18:57:41.812503 | orchestrator | 18:57:41.812 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-06-05 18:57:41.860356 | orchestrator | 18:57:41.859 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-06-05 18:57:41.871628 | orchestrator | 18:57:41.871 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-06-05 18:57:45.577217 | orchestrator | 18:57:45.576 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-06-05 18:57:45.577372 | orchestrator | 18:57:45.577 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-06-05 18:57:45.578148 | orchestrator | 18:57:45.577 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-06-05 18:57:45.578173 | orchestrator | 18:57:45.577 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-06-05 18:57:45.582471 | orchestrator | 18:57:45.582 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-06-05 18:57:45.583637 | orchestrator | 18:57:45.583 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-06-05 18:57:45.584703 | orchestrator | 18:57:45.584 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-06-05 18:57:45.587002 | orchestrator | 18:57:45.586 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-06-05 18:57:45.593540 | orchestrator | 18:57:45.593 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-06-05 18:57:46.193963 | orchestrator | 18:57:46.193 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=24c03cc2-b2a5-4cf8-8852-1f4dda86236b]
2025-06-05 18:57:46.204984 | orchestrator | 18:57:46.204 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=50a4d034-c5f0-4330-a7d8-ab894b1f0c25]
2025-06-05 18:57:46.208163 | orchestrator | 18:57:46.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=da89fb13-3694-40ae-a272-70fb90f4e55f]
2025-06-05 18:57:46.212491 | orchestrator | 18:57:46.210 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-06-05 18:57:46.217246 | orchestrator | 18:57:46.217 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-06-05 18:57:46.217599 | orchestrator | 18:57:46.217 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=f635ef261a8a2b96319238a93ddb8da176fa77a1]
2025-06-05 18:57:46.220480 | orchestrator | 18:57:46.220 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=cf03b960-33f8-4fd5-8bea-a02272b072d8]
2025-06-05 18:57:46.224674 | orchestrator | 18:57:46.224 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-06-05 18:57:46.228127 | orchestrator | 18:57:46.227 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-06-05 18:57:46.230665 | orchestrator | 18:57:46.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-06-05 18:57:46.231633 | orchestrator | 18:57:46.231 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=9e1a620b5b29d4a22fc7830050ce512bcee45cb8]
2025-06-05 18:57:46.239796 | orchestrator | 18:57:46.239 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=10a1977a-d4e6-4a8b-a76c-bb8b1466bde2]
2025-06-05 18:57:46.240260 | orchestrator | 18:57:46.240 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-06-05 18:57:46.241449 | orchestrator | 18:57:46.241 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=4472eb6b-1c6e-42f9-be0b-d37693300441]
2025-06-05 18:57:46.246530 | orchestrator | 18:57:46.246 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312]
2025-06-05 18:57:46.247201 | orchestrator | 18:57:46.247 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-06-05 18:57:46.247749 | orchestrator | 18:57:46.247 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-06-05 18:57:46.250910 | orchestrator | 18:57:46.250 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-06-05 18:57:46.263666 | orchestrator | 18:57:46.263 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=648969e3-6dd4-4b8b-ace0-3e999cf7526e]
2025-06-05 18:57:46.290936 | orchestrator | 18:57:46.290 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=9365a1ca-de8d-4d50-b195-b3372d88a766]
2025-06-05 18:57:47.768590 | orchestrator | 18:57:47.768 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=37e04ebf-1ca1-4477-9c0d-ca02bc1ff810]
2025-06-05 18:57:47.777400 | orchestrator | 18:57:47.777 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-06-05 18:57:55.298108 | orchestrator | 18:57:55.297 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 7s [id=1aa019f7-a0c8-4ef0-b7e8-023632c20c8c]
2025-06-05 18:57:55.308640 | orchestrator | 18:57:55.308 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-06-05 18:57:55.309835 | orchestrator | 18:57:55.309 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-06-05 18:57:55.312920 | orchestrator | 18:57:55.312 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-06-05 18:57:55.504192 | orchestrator | 18:57:55.503 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=e2897d48-aee1-4d4f-b473-39eae552cd44]
2025-06-05 18:57:55.519738 | orchestrator | 18:57:55.519 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=391d0d69-6099-42a7-b156-3c57cbbf152b]
2025-06-05 18:57:55.523848 | orchestrator | 18:57:55.523 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
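The management subnet whose creation completes above matches the plan output earlier in the log (CIDR 192.168.16.0/20, DHCP pool 192.168.31.200-250, Google and Quad9 nameservers). A minimal HCL sketch reconstructed from those plan values, with the `network_id` reference assumed rather than taken from the testbed's actual code:

```hcl
# Hedged reconstruction from the plan output; only the attribute values
# are taken from the log. The network reference is an assumption about
# how the testbed wires the subnet to openstack_networking_network_v2.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from the top of the /20, leaving the
  # rest of the range free for statically addressed nodes.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```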
2025-06-05 18:57:55.536464 | orchestrator | 18:57:55.536 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-06-05 18:57:56.218472 | orchestrator | 18:57:56.218 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-06-05 18:57:56.233570 | orchestrator | 18:57:56.233 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-06-05 18:57:56.238090 | orchestrator | 18:57:56.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-06-05 18:57:56.242073 | orchestrator | 18:57:56.241 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-06-05 18:57:56.248405 | orchestrator | 18:57:56.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-06-05 18:57:56.248480 | orchestrator | 18:57:56.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-06-05 18:57:56.251745 | orchestrator | 18:57:56.251 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-06-05 18:57:56.582248 | orchestrator | 18:57:56.581 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=d77cc427-936e-41af-8b88-c14019752c42]
2025-06-05 18:57:56.589066 | orchestrator | 18:57:56.588 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 11s [id=3640e00a-7211-4496-a331-9499d5efe8aa]
2025-06-05 18:57:56.601357 | orchestrator | 18:57:56.601 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-06-05 18:57:56.604662 | orchestrator | 18:57:56.604 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-06-05 18:57:56.605928 | orchestrator | 18:57:56.605 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=77274211-aee3-4072-87ff-8de0b78784a9]
2025-06-05 18:57:56.617502 | orchestrator | 18:57:56.617 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=5f400765-e9cd-4d3d-a972-34bb4ad0edb4]
2025-06-05 18:57:56.621946 | orchestrator | 18:57:56.621 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-06-05 18:57:56.627842 | orchestrator | 18:57:56.627 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-06-05 18:57:56.639070 | orchestrator | 18:57:56.638 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=cfa42d43-a7d6-4bf7-99bb-aae9db75ee30]
2025-06-05 18:57:56.642399 | orchestrator | 18:57:56.642 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=077eafe1-9404-44ab-9d2f-e62cd06db711]
2025-06-05 18:57:56.646049 | orchestrator | 18:57:56.645 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-06-05 18:57:56.648488 | orchestrator | 18:57:56.648 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-06-05 18:57:56.656393 | orchestrator | 18:57:56.656 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=38d524cb-058b-4154-b8dc-2ef4d020f5e0]
2025-06-05 18:57:56.663938 | orchestrator | 18:57:56.663 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-06-05 18:57:56.796928 | orchestrator | 18:57:56.796 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=fb0d620d-3d27-46e8-a73f-65a2c37f5c6f]
2025-06-05 18:57:56.814846 | orchestrator | 18:57:56.814 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-06-05 18:57:56.948359 | orchestrator | 18:57:56.947 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=cfc2ce32-d4cc-4218-a3d0-e34c23cea535]
2025-06-05 18:57:56.957251 | orchestrator | 18:57:56.956 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-06-05 18:57:57.120629 | orchestrator | 18:57:57.120 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=4fdcba4f-f307-4d82-9e0d-06733ca3c430]
2025-06-05 18:57:57.135797 | orchestrator | 18:57:57.135 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-06-05 18:57:57.153586 | orchestrator | 18:57:57.153 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=1e541f1d-e373-40db-8ec6-5aa0cc8a56a8]
2025-06-05 18:57:57.161322 | orchestrator | 18:57:57.161 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-06-05 18:57:57.281243 | orchestrator | 18:57:57.280 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=0bbd6fc1-b3ab-4a0e-9fb9-3e80652a2721]
2025-06-05 18:57:57.290482 | orchestrator | 18:57:57.290 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
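The `security_group_rule_vrrp` rule created above is the one from the plan output: it opens IP protocol 112 (VRRP, used by keepalived for virtual-IP failover between nodes) for ingress from anywhere. A minimal sketch of that rule, with attribute values taken from the plan and the `security_group_id` reference assumed:

```hcl
# Hedged sketch of the VRRP rule from the plan output. Attribute values
# come from the log; the security group reference is an assumption
# about which group the rule is attached to.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"  # VRRP is IP protocol 112, given numerically
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumed
}
```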
2025-06-05 18:57:57.336381 | orchestrator | 18:57:57.335 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=aa474347-76d6-4e22-bfd1-05e9f60f9388]
2025-06-05 18:57:57.346208 | orchestrator | 18:57:57.345 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-06-05 18:57:57.477328 | orchestrator | 18:57:57.476 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=29c18a88-301a-4b97-9aa2-402bd13749e8]
2025-06-05 18:57:57.484360 | orchestrator | 18:57:57.484 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-06-05 18:57:57.647298 | orchestrator | 18:57:57.646 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=efdc8725-0e40-4786-ba4f-235f86f2c960]
2025-06-05 18:57:57.833646 | orchestrator | 18:57:57.833 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=6675caf4-9b36-446b-b743-2b52ec47e314]
2025-06-05 18:58:01.061633 | orchestrator | 18:58:01.061 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=3b75b404-faa7-45bf-acdb-d9324bc6c2c6]
2025-06-05 18:58:01.163882 | orchestrator | 18:58:01.163 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 5s [id=5a72bd44-44a9-4455-b290-572a36674019]
2025-06-05 18:58:02.122061 | orchestrator | 18:58:02.121 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 5s [id=fb31edf7-c24c-49fd-a8ec-b6810822a2cd]
2025-06-05 18:58:02.177392 | orchestrator | 18:58:02.176 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=74226855-117a-4604-9535-fa4bd2352ae2]
2025-06-05 18:58:02.258907 | orchestrator | 18:58:02.258 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=cb0dc8f9-716f-4655-9e2e-b52a47770132]
2025-06-05 18:58:02.314223 | orchestrator | 18:58:02.313 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 5s [id=28dcff1e-e723-4031-90b9-4e3f1ae69422]
2025-06-05 18:58:02.627666 | orchestrator | 18:58:02.627 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=0c0b7b55-2b33-4afb-8f42-672f056cd168]
2025-06-05 18:58:02.947439 | orchestrator | 18:58:02.946 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=a3498397-794f-47c6-87df-fea3217ef67a]
2025-06-05 18:58:02.973805 | orchestrator | 18:58:02.973 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-06-05 18:58:02.981203 | orchestrator | 18:58:02.981 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-06-05 18:58:02.986645 | orchestrator | 18:58:02.985 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-06-05 18:58:02.994530 | orchestrator | 18:58:02.994 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-06-05 18:58:02.999558 | orchestrator | 18:58:02.999 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-06-05 18:58:03.004112 | orchestrator | 18:58:03.003 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-06-05 18:58:03.005755 | orchestrator | 18:58:03.005 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-06-05 18:58:09.226922 | orchestrator | 18:58:09.226 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 6s [id=602d780a-cd9a-4155-9500-fc2a4b868e1d]
2025-06-05 18:58:09.235995 | orchestrator | 18:58:09.235 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-06-05 18:58:09.249980 | orchestrator | 18:58:09.249 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-06-05 18:58:09.250315 | orchestrator | 18:58:09.250 STDOUT terraform: local_file.inventory: Creating...
2025-06-05 18:58:09.257125 | orchestrator | 18:58:09.256 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=7a06d4c99b6a5b8baf1c975a6a4f93b828caf69c]
2025-06-05 18:58:09.257210 | orchestrator | 18:58:09.257 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=c337be7b2eac9d38f3ba7a94570848e489d96f12]
2025-06-05 18:58:09.908888 | orchestrator | 18:58:09.908 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=602d780a-cd9a-4155-9500-fc2a4b868e1d]
2025-06-05 18:58:12.982585 | orchestrator | 18:58:12.982 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-06-05 18:58:12.987708 | orchestrator | 18:58:12.987 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-06-05 18:58:12.996086 | orchestrator | 18:58:12.995 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-06-05 18:58:13.001473 | orchestrator | 18:58:13.001 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-06-05 18:58:13.005831 | orchestrator | 18:58:13.005 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-06-05 18:58:13.008005 | orchestrator | 18:58:13.007 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-06-05 18:58:22.986575 | orchestrator | 18:58:22.986 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-06-05 18:58:22.988482 | orchestrator | 18:58:22.988 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-06-05 18:58:22.996998 | orchestrator | 18:58:22.996 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-06-05 18:58:23.002328 | orchestrator | 18:58:23.002 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-06-05 18:58:23.006707 | orchestrator | 18:58:23.006 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-06-05 18:58:23.008992 | orchestrator | 18:58:23.008 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-06-05 18:58:23.458506 | orchestrator | 18:58:23.458 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=6cab0527-b586-419f-b08c-0be216cd1231]
2025-06-05 18:58:23.550816 | orchestrator | 18:58:23.550 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=d92b4bf7-f31e-4899-b890-035f58c2e853]
2025-06-05 18:58:23.616109 | orchestrator | 18:58:23.615 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=00b51a8f-be1a-4a45-8ace-41233913b3f3]
2025-06-05 18:58:23.777059 | orchestrator | 18:58:23.776 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=a1d05cd7-4bdd-4918-8486-b55bfd910ab9]
2025-06-05 18:58:32.990994 | orchestrator | 18:58:32.990 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-06-05 18:58:32.991133 | orchestrator | 18:58:32.990 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-06-05 18:58:33.907734 | orchestrator | 18:58:33.907 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=69262513-f36c-4a96-a76a-fc271651d7dc]
2025-06-05 18:58:33.994454 | orchestrator | 18:58:33.994 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=ae2f67f2-ba35-4f0d-b1d6-d5500af17903]
2025-06-05 18:58:34.009992 | orchestrator | 18:58:34.009 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-06-05 18:58:34.018142 | orchestrator | 18:58:34.017 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=2717932384368058832]
2025-06-05 18:58:34.024882 | orchestrator | 18:58:34.024 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-06-05 18:58:34.024934 | orchestrator | 18:58:34.024 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-06-05 18:58:34.025388 | orchestrator | 18:58:34.025 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-06-05 18:58:34.030604 | orchestrator | 18:58:34.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-06-05 18:58:34.037445 | orchestrator | 18:58:34.037 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-06-05 18:58:34.038072 | orchestrator | 18:58:34.037 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-06-05 18:58:34.039301 | orchestrator | 18:58:34.039 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
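The nine `node_volume_attachment` resources starting above attach the nine extra block-storage volumes to the node servers; the attachment IDs reported later in the log have the form `<instance_id>/<volume_id>`. A hedged sketch of the pattern, in which the index-to-server mapping is an illustrative assumption (the log shows three volumes each landing on three of the servers, so the real expression differs):

```hcl
# Hedged sketch of the count-based attachment pattern visible in the
# apply output. The instance index expression is an assumption for
# illustration, not the testbed's actual mapping.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id  # assumed mapping
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```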
2025-06-05 18:58:34.047612 | orchestrator | 18:58:34.047 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-06-05 18:58:34.049707 | orchestrator | 18:58:34.049 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-06-05 18:58:34.058986 | orchestrator | 18:58:34.058 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-06-05 18:58:39.539694 | orchestrator | 18:58:39.538 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=ae2f67f2-ba35-4f0d-b1d6-d5500af17903/9365a1ca-de8d-4d50-b195-b3372d88a766]
2025-06-05 18:58:39.542692 | orchestrator | 18:58:39.542 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=6cab0527-b586-419f-b08c-0be216cd1231/24c03cc2-b2a5-4cf8-8852-1f4dda86236b]
2025-06-05 18:58:39.573596 | orchestrator | 18:58:39.573 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=ae2f67f2-ba35-4f0d-b1d6-d5500af17903/4472eb6b-1c6e-42f9-be0b-d37693300441]
2025-06-05 18:58:39.578909 | orchestrator | 18:58:39.578 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=d92b4bf7-f31e-4899-b890-035f58c2e853/da89fb13-3694-40ae-a272-70fb90f4e55f]
2025-06-05 18:58:39.612752 | orchestrator | 18:58:39.612 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=6cab0527-b586-419f-b08c-0be216cd1231/648969e3-6dd4-4b8b-ace0-3e999cf7526e]
2025-06-05 18:58:39.625902 | orchestrator | 18:58:39.625 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=d92b4bf7-f31e-4899-b890-035f58c2e853/10a1977a-d4e6-4a8b-a76c-bb8b1466bde2]
2025-06-05 18:58:39.640031 | orchestrator | 18:58:39.639 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=ae2f67f2-ba35-4f0d-b1d6-d5500af17903/cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312]
2025-06-05 18:58:39.647064 | orchestrator | 18:58:39.646 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=6cab0527-b586-419f-b08c-0be216cd1231/cf03b960-33f8-4fd5-8bea-a02272b072d8]
2025-06-05 18:58:39.677658 | orchestrator | 18:58:39.677 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=d92b4bf7-f31e-4899-b890-035f58c2e853/50a4d034-c5f0-4330-a7d8-ab894b1f0c25]
2025-06-05 18:58:44.053039 | orchestrator | 18:58:44.052 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-06-05 18:58:54.053466 | orchestrator | 18:58:54.053 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-06-05 18:58:54.763475 | orchestrator | 18:58:54.763 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=87d439c6-dcce-4a8b-9951-d6c1569942e8]
2025-06-05 18:58:54.788169 | orchestrator | 18:58:54.787 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-06-05 18:58:54.788442 | orchestrator | 18:58:54.788 STDOUT terraform: Outputs: 2025-06-05 18:58:54.788694 | orchestrator | 18:58:54.788 STDOUT terraform: manager_address = 2025-06-05 18:58:54.788732 | orchestrator | 18:58:54.788 STDOUT terraform: private_key = 2025-06-05 18:58:54.956911 | orchestrator | ok: Runtime: 0:01:28.947271 2025-06-05 18:58:55.001257 | 2025-06-05 18:58:55.001449 | TASK [Fetch manager address] 2025-06-05 18:58:55.471406 | orchestrator | ok 2025-06-05 18:58:55.483920 | 2025-06-05 18:58:55.484080 | TASK [Set manager_host address] 2025-06-05 18:58:55.561427 | orchestrator | ok 2025-06-05 18:58:55.572253 | 2025-06-05 18:58:55.572417 | LOOP [Update ansible collections] 2025-06-05 18:58:56.464398 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-05 18:58:56.464741 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-05 18:58:56.464792 | orchestrator | Starting galaxy collection install process 2025-06-05 18:58:56.464824 | orchestrator | Process install dependency map 2025-06-05 18:58:56.464853 | orchestrator | Starting collection install process 2025-06-05 18:58:56.464879 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons' 2025-06-05 18:58:56.464924 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons 2025-06-05 18:58:56.464957 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-05 18:58:56.465021 | orchestrator | ok: Item: commons Runtime: 0:00:00.555379 2025-06-05 18:58:57.416033 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-05 18:58:57.416202 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-05 18:58:57.416254 | orchestrator | Starting galaxy 
collection install process 2025-06-05 18:58:57.416293 | orchestrator | Process install dependency map 2025-06-05 18:58:57.416329 | orchestrator | Starting collection install process 2025-06-05 18:58:57.416362 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services' 2025-06-05 18:58:57.416396 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/services 2025-06-05 18:58:57.416428 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-05 18:58:57.416478 | orchestrator | ok: Item: services Runtime: 0:00:00.681014 2025-06-05 18:58:57.441389 | 2025-06-05 18:58:57.441653 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-05 18:59:08.034593 | orchestrator | ok 2025-06-05 18:59:08.047010 | 2025-06-05 18:59:08.047164 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-05 19:00:08.095823 | orchestrator | ok 2025-06-05 19:00:08.108340 | 2025-06-05 19:00:08.108483 | TASK [Fetch manager ssh hostkey] 2025-06-05 19:00:09.691446 | orchestrator | Output suppressed because no_log was given 2025-06-05 19:00:09.705685 | 2025-06-05 19:00:09.705867 | TASK [Get ssh keypair from terraform environment] 2025-06-05 19:00:10.242513 | orchestrator | ok: Runtime: 0:00:00.010942 2025-06-05 19:00:10.258435 | 2025-06-05 19:00:10.258628 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-05 19:00:10.298870 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2025-06-05 19:00:10.308344 | 2025-06-05 19:00:10.308478 | TASK [Run manager part 0] 2025-06-05 19:00:11.163291 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-05 19:00:11.207634 | orchestrator | 2025-06-05 19:00:11.207685 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-05 19:00:11.207692 | orchestrator | 2025-06-05 19:00:11.207705 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-05 19:00:12.956155 | orchestrator | ok: [testbed-manager] 2025-06-05 19:00:12.956198 | orchestrator | 2025-06-05 19:00:12.956225 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-05 19:00:12.956237 | orchestrator | 2025-06-05 19:00:12.956247 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-05 19:00:14.617448 | orchestrator | ok: [testbed-manager] 2025-06-05 19:00:14.617481 | orchestrator | 2025-06-05 19:00:14.617487 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-05 19:00:15.161559 | orchestrator | ok: [testbed-manager] 2025-06-05 19:00:15.161604 | orchestrator | 2025-06-05 19:00:15.161615 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-05 19:00:15.191775 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.191811 | orchestrator | 2025-06-05 19:00:15.191820 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-05 19:00:15.215483 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.215519 | orchestrator | 2025-06-05 19:00:15.215527 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-05 19:00:15.235819 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.235854 | 
orchestrator | 2025-06-05 19:00:15.235860 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-05 19:00:15.271623 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.271666 | orchestrator | 2025-06-05 19:00:15.271675 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-05 19:00:15.301282 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.301317 | orchestrator | 2025-06-05 19:00:15.301325 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-05 19:00:15.326846 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.326881 | orchestrator | 2025-06-05 19:00:15.326890 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-05 19:00:15.349052 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:00:15.349114 | orchestrator | 2025-06-05 19:00:15.349130 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-05 19:00:16.075071 | orchestrator | changed: [testbed-manager] 2025-06-05 19:00:16.075123 | orchestrator | 2025-06-05 19:00:16.075140 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-05 19:03:17.308389 | orchestrator | changed: [testbed-manager] 2025-06-05 19:03:17.308440 | orchestrator | 2025-06-05 19:03:17.308450 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-05 19:04:28.871829 | orchestrator | changed: [testbed-manager] 2025-06-05 19:04:28.871947 | orchestrator | 2025-06-05 19:04:28.871965 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-05 19:04:51.505745 | orchestrator | changed: [testbed-manager] 2025-06-05 19:04:51.505846 | orchestrator | 2025-06-05 19:04:51.505866 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-06-05 19:04:59.957554 | orchestrator | changed: [testbed-manager] 2025-06-05 19:04:59.957688 | orchestrator | 2025-06-05 19:04:59.957710 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-05 19:05:00.006072 | orchestrator | ok: [testbed-manager] 2025-06-05 19:05:00.006115 | orchestrator | 2025-06-05 19:05:00.006125 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-05 19:05:00.791649 | orchestrator | ok: [testbed-manager] 2025-06-05 19:05:00.791737 | orchestrator | 2025-06-05 19:05:00.791757 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-05 19:05:01.528262 | orchestrator | changed: [testbed-manager] 2025-06-05 19:05:01.528353 | orchestrator | 2025-06-05 19:05:01.528368 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-05 19:05:07.942836 | orchestrator | changed: [testbed-manager] 2025-06-05 19:05:07.942956 | orchestrator | 2025-06-05 19:05:07.943010 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-05 19:05:13.884704 | orchestrator | changed: [testbed-manager] 2025-06-05 19:05:13.884800 | orchestrator | 2025-06-05 19:05:13.884820 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-05 19:05:16.459843 | orchestrator | changed: [testbed-manager] 2025-06-05 19:05:16.460035 | orchestrator | 2025-06-05 19:05:16.460053 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-05 19:05:18.207965 | orchestrator | changed: [testbed-manager] 2025-06-05 19:05:18.208669 | orchestrator | 2025-06-05 19:05:18.208704 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-05 
19:05:19.314365 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-05 19:05:19.314463 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-05 19:05:19.314479 | orchestrator | 2025-06-05 19:05:19.314492 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-05 19:05:19.356422 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-05 19:05:19.356499 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-05 19:05:19.356513 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-05 19:05:19.356525 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-05 19:05:23.070695 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-05 19:05:23.070769 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-05 19:05:23.070786 | orchestrator | 2025-06-05 19:05:23.070800 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-05 19:05:23.640548 | orchestrator | changed: [testbed-manager] 2025-06-05 19:05:23.640589 | orchestrator | 2025-06-05 19:05:23.640597 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-05 19:06:51.063018 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-05 19:06:51.063128 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-05 19:06:51.063143 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-05 19:06:51.063153 | orchestrator | 2025-06-05 19:06:51.063163 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-05 19:06:53.360821 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-06-05 19:06:53.360857 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-05 19:06:53.360862 | orchestrator | 2025-06-05 19:06:53.360867 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-05 19:06:53.360872 | orchestrator | 2025-06-05 19:06:53.360876 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-05 19:06:54.738655 | orchestrator | ok: [testbed-manager] 2025-06-05 19:06:54.738748 | orchestrator | 2025-06-05 19:06:54.738769 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-05 19:06:54.783864 | orchestrator | ok: [testbed-manager] 2025-06-05 19:06:54.783924 | orchestrator | 2025-06-05 19:06:54.783931 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-05 19:06:54.844536 | orchestrator | ok: [testbed-manager] 2025-06-05 19:06:54.844595 | orchestrator | 2025-06-05 19:06:54.844603 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-05 19:06:55.598922 | orchestrator | changed: [testbed-manager] 2025-06-05 19:06:55.599012 | orchestrator | 2025-06-05 19:06:55.599032 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-05 19:06:56.320170 | orchestrator | changed: [testbed-manager] 2025-06-05 19:06:56.320312 | orchestrator | 2025-06-05 19:06:56.320332 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-05 19:06:57.626490 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-05 19:06:57.627263 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-05 19:06:57.627285 | orchestrator | 2025-06-05 19:06:57.627303 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-06-05 19:06:59.036305 | orchestrator | changed: [testbed-manager] 2025-06-05 19:06:59.036396 | orchestrator | 2025-06-05 19:06:59.036412 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-05 19:07:00.657031 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-05 19:07:00.657093 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-05 19:07:00.657108 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-05 19:07:00.657120 | orchestrator | 2025-06-05 19:07:00.657132 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-05 19:07:01.164920 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:01.164983 | orchestrator | 2025-06-05 19:07:01.164997 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-05 19:07:01.231918 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:01.231955 | orchestrator | 2025-06-05 19:07:01.231964 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-05 19:07:01.984805 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-05 19:07:01.984841 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:01.984850 | orchestrator | 2025-06-05 19:07:01.984858 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-05 19:07:02.021356 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:02.021395 | orchestrator | 2025-06-05 19:07:02.021405 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-05 19:07:02.054192 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:02.054251 | orchestrator | 2025-06-05 19:07:02.054264 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-06-05 19:07:02.086856 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:02.086886 | orchestrator | 2025-06-05 19:07:02.086893 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-05 19:07:02.136474 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:02.136510 | orchestrator | 2025-06-05 19:07:02.136519 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-05 19:07:02.808769 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:02.809425 | orchestrator | 2025-06-05 19:07:02.809453 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-05 19:07:02.809465 | orchestrator | 2025-06-05 19:07:02.809478 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-05 19:07:04.174612 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:04.174665 | orchestrator | 2025-06-05 19:07:04.174673 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-05 19:07:05.117227 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:05.117274 | orchestrator | 2025-06-05 19:07:05.117281 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:07:05.117288 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-05 19:07:05.117293 | orchestrator | 2025-06-05 19:07:05.652906 | orchestrator | ok: Runtime: 0:06:54.583420 2025-06-05 19:07:05.669226 | 2025-06-05 19:07:05.669390 | TASK [Point out that the log in on the manager is now possible] 2025-06-05 19:07:05.719164 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
2025-06-05 19:07:05.729218 | 2025-06-05 19:07:05.729360 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-05 19:07:05.765620 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-05 19:07:05.777583 | 2025-06-05 19:07:05.777768 | TASK [Run manager part 1 + 2] 2025-06-05 19:07:06.670982 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-05 19:07:06.724088 | orchestrator | 2025-06-05 19:07:06.724135 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-05 19:07:06.724141 | orchestrator | 2025-06-05 19:07:06.724154 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-05 19:07:09.595909 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:09.595966 | orchestrator | 2025-06-05 19:07:09.595990 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-05 19:07:09.634710 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:09.634758 | orchestrator | 2025-06-05 19:07:09.634768 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-05 19:07:09.674843 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:09.674891 | orchestrator | 2025-06-05 19:07:09.674899 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-05 19:07:09.725410 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:09.725462 | orchestrator | 2025-06-05 19:07:09.725472 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-05 19:07:09.804618 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:09.804680 | orchestrator | 2025-06-05 19:07:09.804692 | 
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-05 19:07:09.867563 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:09.867617 | orchestrator | 2025-06-05 19:07:09.867626 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-05 19:07:09.909086 | orchestrator | included: /home/zuul-testbed02/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-05 19:07:09.909130 | orchestrator | 2025-06-05 19:07:09.909136 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-05 19:07:10.619501 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:10.619561 | orchestrator | 2025-06-05 19:07:10.619573 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-05 19:07:10.670929 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:10.670985 | orchestrator | 2025-06-05 19:07:10.670994 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-05 19:07:12.024526 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:12.024586 | orchestrator | 2025-06-05 19:07:12.024598 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-05 19:07:12.586937 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:12.586996 | orchestrator | 2025-06-05 19:07:12.587005 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-05 19:07:13.779956 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:13.780012 | orchestrator | 2025-06-05 19:07:13.780022 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-05 19:07:25.918427 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:25.918509 | orchestrator | 
2025-06-05 19:07:25.918519 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-05 19:07:26.617251 | orchestrator | ok: [testbed-manager] 2025-06-05 19:07:26.617345 | orchestrator | 2025-06-05 19:07:26.617363 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-05 19:07:26.673501 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:26.673664 | orchestrator | 2025-06-05 19:07:26.673715 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-05 19:07:27.586172 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:27.586959 | orchestrator | 2025-06-05 19:07:27.586983 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-05 19:07:28.533318 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:28.533401 | orchestrator | 2025-06-05 19:07:28.533418 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-05 19:07:29.088496 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:29.088583 | orchestrator | 2025-06-05 19:07:29.088599 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-05 19:07:29.129278 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-05 19:07:29.129382 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-05 19:07:29.129398 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-05 19:07:29.129411 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-05 19:07:31.162357 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:31.162439 | orchestrator | 2025-06-05 19:07:31.162455 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-05 19:07:39.960057 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-05 19:07:39.960101 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-05 19:07:39.960110 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-05 19:07:39.960117 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-05 19:07:39.960127 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-05 19:07:39.960133 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-05 19:07:39.960138 | orchestrator | 2025-06-05 19:07:39.960145 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-05 19:07:41.002186 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:41.002313 | orchestrator | 2025-06-05 19:07:41.002331 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-05 19:07:41.048167 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:41.048265 | orchestrator | 2025-06-05 19:07:41.048280 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-05 19:07:44.305436 | orchestrator | changed: [testbed-manager] 2025-06-05 19:07:44.306245 | orchestrator | 2025-06-05 19:07:44.306274 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-05 19:07:44.345564 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:07:44.345640 | orchestrator | 2025-06-05 19:07:44.345657 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-05 19:09:19.932657 | orchestrator | changed: [testbed-manager] 2025-06-05 
19:09:19.932727 | orchestrator | 2025-06-05 19:09:19.932744 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-05 19:09:21.156937 | orchestrator | ok: [testbed-manager] 2025-06-05 19:09:21.157152 | orchestrator | 2025-06-05 19:09:21.157197 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:09:21.157213 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-05 19:09:21.157225 | orchestrator | 2025-06-05 19:09:21.413512 | orchestrator | ok: Runtime: 0:02:15.136002 2025-06-05 19:09:21.422454 | 2025-06-05 19:09:21.422571 | TASK [Reboot manager] 2025-06-05 19:09:22.961024 | orchestrator | ok: Runtime: 0:00:00.971077 2025-06-05 19:09:22.978959 | 2025-06-05 19:09:22.979110 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-05 19:09:37.775348 | orchestrator | ok 2025-06-05 19:09:37.786241 | 2025-06-05 19:09:37.786495 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-05 19:10:37.828432 | orchestrator | ok 2025-06-05 19:10:37.838911 | 2025-06-05 19:10:37.839066 | TASK [Deploy manager + bootstrap nodes] 2025-06-05 19:10:40.246314 | orchestrator | 2025-06-05 19:10:40.246629 | orchestrator | # DEPLOY MANAGER 2025-06-05 19:10:40.246667 | orchestrator | 2025-06-05 19:10:40.246687 | orchestrator | + set -e 2025-06-05 19:10:40.246707 | orchestrator | + echo 2025-06-05 19:10:40.246729 | orchestrator | + echo '# DEPLOY MANAGER' 2025-06-05 19:10:40.246757 | orchestrator | + echo 2025-06-05 19:10:40.246824 | orchestrator | + cat /opt/manager-vars.sh 2025-06-05 19:10:40.250643 | orchestrator | export NUMBER_OF_NODES=6 2025-06-05 19:10:40.250697 | orchestrator | 2025-06-05 19:10:40.250710 | orchestrator | export CEPH_VERSION=reef 2025-06-05 19:10:40.250723 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-05 19:10:40.250735 | orchestrator 
| export MANAGER_VERSION=9.1.0 2025-06-05 19:10:40.250758 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-05 19:10:40.250768 | orchestrator | 2025-06-05 19:10:40.250795 | orchestrator | export ARA=false 2025-06-05 19:10:40.250815 | orchestrator | export DEPLOY_MODE=manager 2025-06-05 19:10:40.250832 | orchestrator | export TEMPEST=false 2025-06-05 19:10:40.250846 | orchestrator | export IS_ZUUL=true 2025-06-05 19:10:40.250861 | orchestrator | 2025-06-05 19:10:40.250878 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172 2025-06-05 19:10:40.250888 | orchestrator | export EXTERNAL_API=false 2025-06-05 19:10:40.250897 | orchestrator | 2025-06-05 19:10:40.250907 | orchestrator | export IMAGE_USER=ubuntu 2025-06-05 19:10:40.250920 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-05 19:10:40.250930 | orchestrator | 2025-06-05 19:10:40.250940 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-05 19:10:40.251163 | orchestrator | 2025-06-05 19:10:40.251188 | orchestrator | + echo 2025-06-05 19:10:40.251205 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-05 19:10:40.252177 | orchestrator | ++ export INTERACTIVE=false 2025-06-05 19:10:40.252258 | orchestrator | ++ INTERACTIVE=false 2025-06-05 19:10:40.252275 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-05 19:10:40.252291 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-05 19:10:40.252819 | orchestrator | + source /opt/manager-vars.sh 2025-06-05 19:10:40.252861 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-05 19:10:40.252875 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-05 19:10:40.252885 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-05 19:10:40.252894 | orchestrator | ++ CEPH_VERSION=reef 2025-06-05 19:10:40.252904 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-05 19:10:40.252914 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-05 19:10:40.252924 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-05 19:10:40.252935 | 
orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-05 19:10:40.252950 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-05 19:10:40.252969 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-05 19:10:40.252982 | orchestrator | ++ export ARA=false
2025-06-05 19:10:40.252992 | orchestrator | ++ ARA=false
2025-06-05 19:10:40.253008 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-05 19:10:40.253018 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-05 19:10:40.253069 | orchestrator | ++ export TEMPEST=false
2025-06-05 19:10:40.253081 | orchestrator | ++ TEMPEST=false
2025-06-05 19:10:40.253091 | orchestrator | ++ export IS_ZUUL=true
2025-06-05 19:10:40.253101 | orchestrator | ++ IS_ZUUL=true
2025-06-05 19:10:40.253282 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-06-05 19:10:40.253298 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-06-05 19:10:40.253309 | orchestrator | ++ export EXTERNAL_API=false
2025-06-05 19:10:40.253318 | orchestrator | ++ EXTERNAL_API=false
2025-06-05 19:10:40.253331 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-05 19:10:40.253341 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-05 19:10:40.253351 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-05 19:10:40.253402 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-05 19:10:40.253421 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-05 19:10:40.253430 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-05 19:10:40.253602 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-05 19:10:40.306631 | orchestrator | + docker version
2025-06-05 19:10:40.553844 | orchestrator | Client: Docker Engine - Community
2025-06-05 19:10:40.553949 | orchestrator | Version: 27.5.1
2025-06-05 19:10:40.553967 | orchestrator | API version: 1.47
2025-06-05 19:10:40.553978 | orchestrator | Go version: go1.22.11
2025-06-05 19:10:40.553989 | orchestrator | Git commit: 9f9e405
2025-06-05 19:10:40.554000 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-05 19:10:40.554071 | orchestrator | OS/Arch: linux/amd64
2025-06-05 19:10:40.554087 | orchestrator | Context: default
2025-06-05 19:10:40.554098 | orchestrator |
2025-06-05 19:10:40.554110 | orchestrator | Server: Docker Engine - Community
2025-06-05 19:10:40.554121 | orchestrator | Engine:
2025-06-05 19:10:40.554133 | orchestrator | Version: 27.5.1
2025-06-05 19:10:40.554186 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-05 19:10:40.554239 | orchestrator | Go version: go1.22.11
2025-06-05 19:10:40.554252 | orchestrator | Git commit: 4c9b3b0
2025-06-05 19:10:40.554263 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-05 19:10:40.554274 | orchestrator | OS/Arch: linux/amd64
2025-06-05 19:10:40.554285 | orchestrator | Experimental: false
2025-06-05 19:10:40.554296 | orchestrator | containerd:
2025-06-05 19:10:40.554307 | orchestrator | Version: 1.7.27
2025-06-05 19:10:40.554318 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-05 19:10:40.554330 | orchestrator | runc:
2025-06-05 19:10:40.554341 | orchestrator | Version: 1.2.5
2025-06-05 19:10:40.554351 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-05 19:10:40.554362 | orchestrator | docker-init:
2025-06-05 19:10:40.554373 | orchestrator | Version: 0.19.0
2025-06-05 19:10:40.554385 | orchestrator | GitCommit: de40ad0
2025-06-05 19:10:40.556820 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-05 19:10:40.564496 | orchestrator | + set -e
2025-06-05 19:10:40.564545 | orchestrator | + source /opt/manager-vars.sh
2025-06-05 19:10:40.564560 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-05 19:10:40.564572 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-05 19:10:40.564583 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-05 19:10:40.564594 | orchestrator | ++ CEPH_VERSION=reef
2025-06-05 19:10:40.564606 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-05 19:10:40.564617 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-05 19:10:40.564628 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-05 19:10:40.564640 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-05 19:10:40.564651 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-05 19:10:40.564662 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-05 19:10:40.564672 | orchestrator | ++ export ARA=false
2025-06-05 19:10:40.564684 | orchestrator | ++ ARA=false
2025-06-05 19:10:40.564695 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-05 19:10:40.564705 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-05 19:10:40.564716 | orchestrator | ++ export TEMPEST=false
2025-06-05 19:10:40.564727 | orchestrator | ++ TEMPEST=false
2025-06-05 19:10:40.564738 | orchestrator | ++ export IS_ZUUL=true
2025-06-05 19:10:40.564748 | orchestrator | ++ IS_ZUUL=true
2025-06-05 19:10:40.564759 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-06-05 19:10:40.564771 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-06-05 19:10:40.564782 | orchestrator | ++ export EXTERNAL_API=false
2025-06-05 19:10:40.564792 | orchestrator | ++ EXTERNAL_API=false
2025-06-05 19:10:40.564803 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-05 19:10:40.564814 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-05 19:10:40.564825 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-05 19:10:40.564835 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-05 19:10:40.564846 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-05 19:10:40.564857 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-05 19:10:40.564868 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-05 19:10:40.564879 | orchestrator | ++ export INTERACTIVE=false
2025-06-05 19:10:40.564889 | orchestrator | ++ INTERACTIVE=false
2025-06-05 19:10:40.564900 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-05 19:10:40.564915 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-05 19:10:40.564926 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-05 19:10:40.564937 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0
2025-06-05 19:10:40.572752 | orchestrator | + set -e
2025-06-05 19:10:40.572866 | orchestrator | + VERSION=9.1.0
2025-06-05 19:10:40.572887 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml
2025-06-05 19:10:40.579401 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-05 19:10:40.579461 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-05 19:10:40.583082 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-06-05 19:10:40.587851 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-06-05 19:10:40.596178 | orchestrator | + set -e
2025-06-05 19:10:40.596270 | orchestrator | /opt/configuration ~
2025-06-05 19:10:40.596288 | orchestrator | + pushd /opt/configuration
2025-06-05 19:10:40.596300 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-05 19:10:40.597655 | orchestrator | + source /opt/venv/bin/activate
2025-06-05 19:10:40.599508 | orchestrator | ++ deactivate nondestructive
2025-06-05 19:10:40.599533 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:40.599548 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:40.599585 | orchestrator | ++ hash -r
2025-06-05 19:10:40.599597 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:40.599608 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-05 19:10:40.599618 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-05 19:10:40.599630 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-05 19:10:40.599641 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-05 19:10:40.599651 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-05 19:10:40.599662 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-05 19:10:40.599673 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-05 19:10:40.599685 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-05 19:10:40.599697 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-05 19:10:40.599708 | orchestrator | ++ export PATH
2025-06-05 19:10:40.599719 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:40.599730 | orchestrator | ++ '[' -z '' ']'
2025-06-05 19:10:40.599741 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-05 19:10:40.599752 | orchestrator | ++ PS1='(venv) '
2025-06-05 19:10:40.599762 | orchestrator | ++ export PS1
2025-06-05 19:10:40.599773 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-05 19:10:40.599784 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-05 19:10:40.599795 | orchestrator | ++ hash -r
2025-06-05 19:10:40.599806 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-06-05 19:10:41.643230 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-06-05 19:10:41.644132 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3)
2025-06-05 19:10:41.645507 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-06-05 19:10:41.646728 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-06-05 19:10:41.647896 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-06-05 19:10:41.657934 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-06-05 19:10:41.659245 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-06-05 19:10:41.660327 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-06-05 19:10:41.661450 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-06-05 19:10:41.693231 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-06-05 19:10:41.694896 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-06-05 19:10:41.696322 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0)
2025-06-05 19:10:41.697843 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26)
2025-06-05 19:10:41.701923 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-06-05 19:10:41.906678 | orchestrator | ++ which gilt
2025-06-05 19:10:41.911385 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-06-05 19:10:41.911466 | orchestrator | + /opt/venv/bin/gilt overlay
2025-06-05 19:10:42.153295 | orchestrator | osism.cfg-generics:
2025-06-05 19:10:42.311545 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-06-05 19:10:42.311670 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-06-05 19:10:42.311747 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-06-05 19:10:42.312059 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-06-05 19:10:43.291784 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-06-05 19:10:43.303162 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-06-05 19:10:43.658500 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-06-05 19:10:43.714807 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-05 19:10:43.714902 | orchestrator | + deactivate
2025-06-05 19:10:43.714921 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-05 19:10:43.714928 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-05 19:10:43.714933 | orchestrator | + export PATH
2025-06-05 19:10:43.714937 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-05 19:10:43.714941 | orchestrator | + '[' -n '' ']'
2025-06-05 19:10:43.714947 | orchestrator | + hash -r
2025-06-05 19:10:43.714951 | orchestrator | + '[' -n '' ']'
2025-06-05 19:10:43.714955 | orchestrator | + unset VIRTUAL_ENV
2025-06-05 19:10:43.714959 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-05 19:10:43.714963 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-05 19:10:43.714967 | orchestrator | + unset -f deactivate
2025-06-05 19:10:43.714978 | orchestrator | ~
2025-06-05 19:10:43.714982 | orchestrator | + popd
2025-06-05 19:10:43.717089 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]]
2025-06-05 19:10:43.717099 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-05 19:10:43.718106 | orchestrator | ++ semver 9.1.0 7.0.0
2025-06-05 19:10:43.777865 | orchestrator | + [[ 1 -ge 0 ]]
2025-06-05 19:10:43.777990 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-05 19:10:43.778014 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-05 19:10:43.877274 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-05 19:10:43.877384 | orchestrator | + source /opt/venv/bin/activate
2025-06-05 19:10:43.877426 | orchestrator | ++ deactivate nondestructive
2025-06-05 19:10:43.877440 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:43.877460 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:43.877489 | orchestrator | ++ hash -r
2025-06-05 19:10:43.877501 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:43.877512 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-05 19:10:43.877523 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-05 19:10:43.877535 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-05 19:10:43.877547 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-05 19:10:43.877558 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-05 19:10:43.877569 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-05 19:10:43.877585 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-05 19:10:43.877913 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-05 19:10:43.877947 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-05 19:10:43.877988 | orchestrator | ++ export PATH
2025-06-05 19:10:43.878000 | orchestrator | ++ '[' -n '' ']'
2025-06-05 19:10:43.878067 | orchestrator | ++ '[' -z '' ']'
2025-06-05 19:10:43.878082 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-05 19:10:43.878094 | orchestrator | ++ PS1='(venv) '
2025-06-05 19:10:43.878105 | orchestrator | ++ export PS1
2025-06-05 19:10:43.878116 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-05 19:10:43.878127 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-05 19:10:43.878173 | orchestrator | ++ hash -r
2025-06-05 19:10:43.878192 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-05 19:10:44.949480 | orchestrator |
2025-06-05 19:10:44.949597 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-05 19:10:44.949615 | orchestrator |
2025-06-05 19:10:44.949628 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-05 19:10:45.506357 | orchestrator | ok: [testbed-manager]
2025-06-05 19:10:45.506470 | orchestrator |
2025-06-05 19:10:45.506488 | orchestrator | TASK [Copy fact files] *********************************************************
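The `set-manager-version.sh` step traced above pins `manager_version` with `sed` and, for a non-latest `MANAGER_VERSION`, deletes the explicit `ceph_version`/`openstack_version` pins so the release defaults apply. A minimal sketch of that edit, run against a local example file instead of `/opt/configuration` (the file content here is illustrative, not the real `configuration.yml`):

```shell
#!/usr/bin/env sh
# Sketch of the version-pinning step seen in the trace; works on a local
# example file rather than /opt/configuration/environments/manager/configuration.yml.
set -e

VERSION=9.1.0
cfg=configuration.yml.example

# Illustrative content only.
cat > "$cfg" <<EOF
manager_version: latest
ceph_version: reef
openstack_version: 2024.2
EOF

# Pin the manager version ...
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"
# ... then drop the explicit ceph/openstack pins, as the trace shows
# for a non-latest MANAGER_VERSION.
sed -i /ceph_version:/d "$cfg"
sed -i /openstack_version:/d "$cfg"

cat "$cfg"
```

After the three `sed` calls the example file contains only the pinned `manager_version: 9.1.0` line.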
2025-06-05 19:10:46.494759 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:46.494867 | orchestrator |
2025-06-05 19:10:46.494885 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-05 19:10:46.494899 | orchestrator |
2025-06-05 19:10:46.494911 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-05 19:10:48.749849 | orchestrator | ok: [testbed-manager]
2025-06-05 19:10:48.749966 | orchestrator |
2025-06-05 19:10:48.749991 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-05 19:10:48.806710 | orchestrator | ok: [testbed-manager]
2025-06-05 19:10:48.806786 | orchestrator |
2025-06-05 19:10:48.806799 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-05 19:10:49.278158 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:49.278276 | orchestrator |
2025-06-05 19:10:49.278296 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-05 19:10:49.311887 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:10:49.311976 | orchestrator |
2025-06-05 19:10:49.311992 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-05 19:10:49.650579 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:49.650676 | orchestrator |
2025-06-05 19:10:49.650689 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-05 19:10:49.705239 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:10:49.705325 | orchestrator |
2025-06-05 19:10:49.705340 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-05 19:10:50.058195 | orchestrator | ok: [testbed-manager]
2025-06-05 19:10:50.058302 | orchestrator |
2025-06-05 19:10:50.058319 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-05 19:10:50.176873 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:10:50.176972 | orchestrator |
2025-06-05 19:10:50.176986 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-05 19:10:50.176999 | orchestrator |
2025-06-05 19:10:50.177010 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-05 19:10:52.966453 | orchestrator | ok: [testbed-manager]
2025-06-05 19:10:52.966556 | orchestrator |
2025-06-05 19:10:52.966572 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-05 19:10:53.054604 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-05 19:10:53.054705 | orchestrator |
2025-06-05 19:10:53.054728 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-05 19:10:53.111175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-05 19:10:53.111266 | orchestrator |
2025-06-05 19:10:53.111279 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-05 19:10:54.195670 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-05 19:10:54.195767 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-05 19:10:54.195786 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-05 19:10:54.195798 | orchestrator |
2025-06-05 19:10:54.195810 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-05 19:10:55.969198 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-05 19:10:55.969298 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
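Earlier in the trace, `semver2.sh` (symlinked to `/usr/local/bin/semver`) compared `9.1.0` against `7.0.0`, produced `1`, and the deploy script then appended `enable_osism_kubernetes: true`. The script's exact interface isn't shown beyond that output, so this is a stand-in sketch of the same contract (print -1/0/1 for less/equal/greater) built on GNU `sort -V`:

```shell
#!/usr/bin/env sh
# Stand-in for the `semver 9.1.0 7.0.0` comparison in the trace.
# Assumption: semver2.sh prints -1/0/1 for less/equal/greater; this
# sketch reproduces that contract with GNU coreutils `sort -V`.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1
    else
        echo 1
    fi
}

# 9.1.0 is newer than 7.0.0, so the deploy script takes the
# "enable_osism_kubernetes" branch, as the trace shows.
result=$(semver_cmp 9.1.0 7.0.0)
if [ "$result" -ge 0 ]; then
    echo 'enable_osism_kubernetes: true'
fi
```

`sort -V` orders `7.0.0` before `9.1.0`, so `semver_cmp 9.1.0 7.0.0` yields `1` and the feature flag is emitted, matching the `[[ 1 -ge 0 ]]` test in the log.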
2025-06-05 19:10:55.969310 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-05 19:10:55.969320 | orchestrator |
2025-06-05 19:10:55.969330 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-05 19:10:56.593601 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-05 19:10:56.593724 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:56.593751 | orchestrator |
2025-06-05 19:10:56.593771 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-05 19:10:57.217221 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-05 19:10:57.217322 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:57.217337 | orchestrator |
2025-06-05 19:10:57.217349 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-05 19:10:57.272363 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:10:57.272438 | orchestrator |
2025-06-05 19:10:57.272452 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-05 19:10:57.636165 | orchestrator | ok: [testbed-manager]
2025-06-05 19:10:57.636260 | orchestrator |
2025-06-05 19:10:57.636274 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-05 19:10:57.715338 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-05 19:10:57.715448 | orchestrator |
2025-06-05 19:10:57.715463 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-05 19:10:58.809078 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:58.809217 | orchestrator |
2025-06-05 19:10:58.809234 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-05 19:10:59.609508 | orchestrator | changed: [testbed-manager]
2025-06-05 19:10:59.609608 | orchestrator |
2025-06-05 19:10:59.609624 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-05 19:11:11.022506 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:11.022591 | orchestrator |
2025-06-05 19:11:11.022612 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-05 19:11:11.072904 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:11:11.072961 | orchestrator |
2025-06-05 19:11:11.072967 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-05 19:11:11.072972 | orchestrator |
2025-06-05 19:11:11.072977 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-05 19:11:12.858217 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:12.858298 | orchestrator |
2025-06-05 19:11:12.858309 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-05 19:11:12.968310 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-05 19:11:12.968376 | orchestrator |
2025-06-05 19:11:12.968382 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-05 19:11:13.022461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-05 19:11:13.022504 | orchestrator |
2025-06-05 19:11:13.022510 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-05 19:11:15.491807 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:15.491885 | orchestrator |
2025-06-05 19:11:15.491892 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-05 19:11:15.537775 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:15.537804 | orchestrator |
2025-06-05 19:11:15.537811 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-05 19:11:15.669142 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-05 19:11:15.669211 | orchestrator |
2025-06-05 19:11:15.669218 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-05 19:11:18.434601 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-05 19:11:18.434725 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-05 19:11:18.434742 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-05 19:11:18.434755 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-05 19:11:18.434767 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-05 19:11:18.434778 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-05 19:11:18.434790 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-05 19:11:18.434801 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-05 19:11:18.434813 | orchestrator |
2025-06-05 19:11:18.434827 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-05 19:11:19.079013 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:19.079148 | orchestrator |
2025-06-05 19:11:19.079167 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-05 19:11:19.727708 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:19.727804 | orchestrator |
2025-06-05 19:11:19.727819 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-05 19:11:19.813390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-05 19:11:19.813499 | orchestrator |
2025-06-05 19:11:19.813516 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-05 19:11:21.010007 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-05 19:11:21.010199 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-05 19:11:21.010215 | orchestrator |
2025-06-05 19:11:21.010229 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-05 19:11:21.632038 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:21.632179 | orchestrator |
2025-06-05 19:11:21.632196 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-05 19:11:21.693509 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:11:21.693618 | orchestrator |
2025-06-05 19:11:21.693635 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-05 19:11:21.766498 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-05 19:11:21.766601 | orchestrator |
2025-06-05 19:11:21.766615 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-05 19:11:23.058983 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-05 19:11:23.059109 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-05 19:11:23.059180 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:23.059203 | orchestrator |
2025-06-05 19:11:23.059224 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-05 19:11:23.685953 | orchestrator | changed: [testbed-manager]
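Tasks such as "Copy private ssh keys" above boil down to placing a secret file with owner-only permissions. The role itself uses Ansible's copy mechanism; the shell equivalent of the pattern, with illustrative paths and placeholder key material, looks like this:

```shell
#!/usr/bin/env sh
# Sketch of the file-placement pattern behind "Copy private ssh keys":
# create the target directory, then install the secret with owner-only
# permissions. Paths and key content are illustrative only.
set -e

secrets_dir=./demo-secrets          # stands in for /opt/ansible/secrets
mkdir -p "$secrets_dir"

# Placeholder key body; the real content comes from the configuration repo.
printf 'dummy-key-material\n' > id_rsa.tmp
install -m 0600 id_rsa.tmp "$secrets_dir/id_rsa.operator"
rm -f id_rsa.tmp

# Show the resulting mode (GNU stat), which should be 600.
stat -c '%a' "$secrets_dir/id_rsa.operator"
```

`install -m 0600` sets the mode atomically with the copy, which is why it is preferred over `cp` followed by `chmod` for private keys.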
2025-06-05 19:11:23.686118 | orchestrator |
2025-06-05 19:11:23.686185 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-05 19:11:23.734333 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:11:23.734437 | orchestrator |
2025-06-05 19:11:23.734453 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-05 19:11:23.834386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-05 19:11:23.834492 | orchestrator |
2025-06-05 19:11:23.834508 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-05 19:11:24.383174 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:24.383279 | orchestrator |
2025-06-05 19:11:24.383297 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-05 19:11:24.799811 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:24.799920 | orchestrator |
2025-06-05 19:11:24.799936 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-05 19:11:25.966755 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-05 19:11:25.966866 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-05 19:11:25.966881 | orchestrator |
2025-06-05 19:11:25.966895 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-05 19:11:26.604862 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:26.604973 | orchestrator |
2025-06-05 19:11:26.604989 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-05 19:11:27.014918 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:27.015026 | orchestrator |
2025-06-05 19:11:27.015042 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-05 19:11:27.361723 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:27.361830 | orchestrator |
2025-06-05 19:11:27.361846 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-05 19:11:27.401525 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:11:27.401619 | orchestrator |
2025-06-05 19:11:27.401632 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-05 19:11:27.462990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-05 19:11:27.463096 | orchestrator |
2025-06-05 19:11:27.463148 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-05 19:11:27.514919 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:27.515013 | orchestrator |
2025-06-05 19:11:27.515026 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-05 19:11:29.477069 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-05 19:11:29.477259 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-05 19:11:29.477277 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-05 19:11:29.478086 | orchestrator |
2025-06-05 19:11:29.478109 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-05 19:11:30.160819 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:30.160922 | orchestrator |
2025-06-05 19:11:30.160936 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-05 19:11:30.866351 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:30.866459 | orchestrator |
2025-06-05 19:11:30.866475 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-05 19:11:31.562522 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:31.562631 | orchestrator |
2025-06-05 19:11:31.562646 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-05 19:11:31.637487 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-05 19:11:31.637554 | orchestrator |
2025-06-05 19:11:31.637567 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-05 19:11:31.675839 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:31.675908 | orchestrator |
2025-06-05 19:11:31.675922 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-05 19:11:32.357535 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-05 19:11:32.357623 | orchestrator |
2025-06-05 19:11:32.357634 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-05 19:11:32.438446 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-05 19:11:32.438535 | orchestrator |
2025-06-05 19:11:32.438548 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-05 19:11:33.130842 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:33.130946 | orchestrator |
2025-06-05 19:11:33.130962 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-05 19:11:33.720426 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:33.720530 | orchestrator |
2025-06-05 19:11:33.720546 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-05 19:11:33.776897 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:11:33.776980 | orchestrator |
2025-06-05 19:11:33.776994 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-05 19:11:33.829405 | orchestrator | ok: [testbed-manager]
2025-06-05 19:11:33.829496 | orchestrator |
2025-06-05 19:11:33.829510 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-05 19:11:34.640205 | orchestrator | changed: [testbed-manager]
2025-06-05 19:11:34.640347 | orchestrator |
2025-06-05 19:11:34.640364 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-05 19:12:36.310815 | orchestrator | changed: [testbed-manager]
2025-06-05 19:12:36.310924 | orchestrator |
2025-06-05 19:12:36.310940 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-05 19:12:37.351197 | orchestrator | ok: [testbed-manager]
2025-06-05 19:12:37.351317 | orchestrator |
2025-06-05 19:12:37.351332 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-05 19:12:37.408749 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:12:37.408844 | orchestrator |
2025-06-05 19:12:37.408858 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-05 19:12:40.193670 | orchestrator | changed: [testbed-manager]
2025-06-05 19:12:40.193835 | orchestrator |
2025-06-05 19:12:40.193854 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-06-05 19:12:40.256800 | orchestrator | ok: [testbed-manager]
2025-06-05 19:12:40.256903 | orchestrator |
2025-06-05 19:12:40.256917 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-05 19:12:40.256929 | orchestrator |
2025-06-05 19:12:40.256941 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-05 19:12:40.311485 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:12:40.311595 | orchestrator | 2025-06-05 19:12:40.311642 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-05 19:13:40.360488 | orchestrator | Pausing for 60 seconds 2025-06-05 19:13:40.360593 | orchestrator | changed: [testbed-manager] 2025-06-05 19:13:40.360605 | orchestrator | 2025-06-05 19:13:40.360615 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-05 19:13:44.362432 | orchestrator | changed: [testbed-manager] 2025-06-05 19:13:44.362537 | orchestrator | 2025-06-05 19:13:44.362553 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-05 19:14:25.935572 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-05 19:14:25.935694 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
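The "Wait for an healthy manager service" handler above polls with a retry budget (50 retries; two were consumed before the service came up). A minimal shell sketch of such a bounded health poll, assuming `docker inspect` health status as the probe and a fixed delay — the actual handler is an Ansible task whose module and timing are not visible in this log:

```shell
# Bounded health poll: return 0 as soon as the container reports "healthy",
# give up after max_attempts probes. Probe command and delay are assumptions.
wait_healthy() {
    local name=$1 max_attempts=${2:-50} delay=${3:-5} attempt=1
    while true; do
        status=$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)
        [ "$status" = healthy ] && return 0
        if [ "$attempt" -ge "$max_attempts" ]; then
            echo "FAILED: $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep "$delay"
    done
}
```

The same pattern appears again further down, where the deploy script's own `wait_for_container_healthy` helper is traced against the ceph-ansible, kolla-ansible and osism-ansible containers.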
2025-06-05 19:14:25.935709 | orchestrator | changed: [testbed-manager] 2025-06-05 19:14:25.935722 | orchestrator | 2025-06-05 19:14:25.935735 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-05 19:14:34.275076 | orchestrator | changed: [testbed-manager] 2025-06-05 19:14:34.275171 | orchestrator | 2025-06-05 19:14:34.275198 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-05 19:14:34.366713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-05 19:14:34.366838 | orchestrator | 2025-06-05 19:14:34.366895 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-05 19:14:34.366919 | orchestrator | 2025-06-05 19:14:34.366941 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-05 19:14:34.418249 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:14:34.418357 | orchestrator | 2025-06-05 19:14:34.418374 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:14:34.418389 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-05 19:14:34.418400 | orchestrator | 2025-06-05 19:14:34.516814 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-05 19:14:34.516909 | orchestrator | + deactivate 2025-06-05 19:14:34.516916 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-05 19:14:34.516923 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-05 19:14:34.516927 | orchestrator | + export PATH 2025-06-05 19:14:34.516934 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-05 
19:14:34.516939 | orchestrator | + '[' -n '' ']' 2025-06-05 19:14:34.516943 | orchestrator | + hash -r 2025-06-05 19:14:34.516947 | orchestrator | + '[' -n '' ']' 2025-06-05 19:14:34.516951 | orchestrator | + unset VIRTUAL_ENV 2025-06-05 19:14:34.516955 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-05 19:14:34.516959 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-05 19:14:34.516963 | orchestrator | + unset -f deactivate 2025-06-05 19:14:34.516967 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-05 19:14:34.523317 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-05 19:14:34.523361 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-05 19:14:34.523367 | orchestrator | + local max_attempts=60 2025-06-05 19:14:34.523372 | orchestrator | + local name=ceph-ansible 2025-06-05 19:14:34.523377 | orchestrator | + local attempt_num=1 2025-06-05 19:14:34.524017 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:14:34.567254 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:14:34.567319 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-05 19:14:34.567363 | orchestrator | + local max_attempts=60 2025-06-05 19:14:34.567371 | orchestrator | + local name=kolla-ansible 2025-06-05 19:14:34.567376 | orchestrator | + local attempt_num=1 2025-06-05 19:14:34.568529 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-05 19:14:34.611010 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:14:34.611114 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-05 19:14:34.611131 | orchestrator | + local max_attempts=60 2025-06-05 19:14:34.611143 | orchestrator | + local name=osism-ansible 2025-06-05 19:14:34.611155 | orchestrator | + local attempt_num=1 2025-06-05 19:14:34.611276 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-06-05 19:14:34.651361 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:14:34.651444 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-05 19:14:34.651455 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-05 19:14:35.374352 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-05 19:14:35.571212 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-05 19:14:35.571312 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571330 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571342 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-05 19:14:35.571356 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-05 19:14:35.571367 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571378 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571389 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy) 2025-06-05 19:14:35.571399 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-06-05 19:14:35.571410 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-05 19:14:35.571421 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571432 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-05 19:14:35.571443 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571454 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.571464 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-05 19:14:35.581907 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-05 19:14:35.627758 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-05 19:14:35.627831 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-05 19:14:35.632494 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-05 19:14:37.350334 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:14:37.350466 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:14:37.350484 | orchestrator | Registering Redlock._release_script 2025-06-05 19:14:37.555375 | orchestrator | 2025-06-05 19:14:37 | INFO  | Task 8f20c56e-c9b3-4c39-9712-f4a40322667d (resolvconf) was prepared for execution. 
2025-06-05 19:14:37.555468 | orchestrator | 2025-06-05 19:14:37 | INFO  | It takes a moment until task 8f20c56e-c9b3-4c39-9712-f4a40322667d (resolvconf) has been started and output is visible here. 2025-06-05 19:14:41.311526 | orchestrator | 2025-06-05 19:14:41.311746 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-05 19:14:41.313020 | orchestrator | 2025-06-05 19:14:41.314727 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-05 19:14:41.315723 | orchestrator | Thursday 05 June 2025 19:14:41 +0000 (0:00:00.131) 0:00:00.131 ********* 2025-06-05 19:14:44.720050 | orchestrator | ok: [testbed-manager] 2025-06-05 19:14:44.721042 | orchestrator | 2025-06-05 19:14:44.721617 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-05 19:14:44.722229 | orchestrator | Thursday 05 June 2025 19:14:44 +0000 (0:00:03.409) 0:00:03.540 ********* 2025-06-05 19:14:44.765327 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:14:44.765388 | orchestrator | 2025-06-05 19:14:44.765560 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-05 19:14:44.766429 | orchestrator | Thursday 05 June 2025 19:14:44 +0000 (0:00:00.047) 0:00:03.587 ********* 2025-06-05 19:14:44.834250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-05 19:14:44.835074 | orchestrator | 2025-06-05 19:14:44.835484 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-05 19:14:44.836415 | orchestrator | Thursday 05 June 2025 19:14:44 +0000 (0:00:00.068) 0:00:03.656 ********* 2025-06-05 19:14:44.908565 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-05 19:14:44.908641 | orchestrator | 2025-06-05 19:14:44.909059 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-05 19:14:44.909588 | orchestrator | Thursday 05 June 2025 19:14:44 +0000 (0:00:00.073) 0:00:03.730 ********* 2025-06-05 19:14:45.754252 | orchestrator | ok: [testbed-manager] 2025-06-05 19:14:45.754459 | orchestrator | 2025-06-05 19:14:45.754586 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-05 19:14:45.755103 | orchestrator | Thursday 05 June 2025 19:14:45 +0000 (0:00:00.843) 0:00:04.574 ********* 2025-06-05 19:14:45.809771 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:14:45.810494 | orchestrator | 2025-06-05 19:14:45.811882 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-05 19:14:45.813560 | orchestrator | Thursday 05 June 2025 19:14:45 +0000 (0:00:00.057) 0:00:04.631 ********* 2025-06-05 19:14:46.227164 | orchestrator | ok: [testbed-manager] 2025-06-05 19:14:46.227347 | orchestrator | 2025-06-05 19:14:46.228285 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-05 19:14:46.229109 | orchestrator | Thursday 05 June 2025 19:14:46 +0000 (0:00:00.417) 0:00:05.049 ********* 2025-06-05 19:14:46.298218 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:14:46.298664 | orchestrator | 2025-06-05 19:14:46.299259 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-05 19:14:46.300515 | orchestrator | Thursday 05 June 2025 19:14:46 +0000 (0:00:00.070) 0:00:05.119 ********* 2025-06-05 19:14:46.751146 | orchestrator | changed: [testbed-manager] 2025-06-05 19:14:46.752186 | orchestrator | 2025-06-05 
19:14:46.752788 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-05 19:14:46.754404 | orchestrator | Thursday 05 June 2025 19:14:46 +0000 (0:00:00.453) 0:00:05.573 ********* 2025-06-05 19:14:47.631723 | orchestrator | changed: [testbed-manager] 2025-06-05 19:14:47.632018 | orchestrator | 2025-06-05 19:14:47.632761 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-05 19:14:47.633577 | orchestrator | Thursday 05 June 2025 19:14:47 +0000 (0:00:00.880) 0:00:06.453 ********* 2025-06-05 19:14:48.603656 | orchestrator | ok: [testbed-manager] 2025-06-05 19:14:48.604504 | orchestrator | 2025-06-05 19:14:48.606253 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-05 19:14:48.606414 | orchestrator | Thursday 05 June 2025 19:14:48 +0000 (0:00:00.969) 0:00:07.423 ********* 2025-06-05 19:14:48.678633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-05 19:14:48.678728 | orchestrator | 2025-06-05 19:14:48.679825 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-05 19:14:48.680737 | orchestrator | Thursday 05 June 2025 19:14:48 +0000 (0:00:00.076) 0:00:07.500 ********* 2025-06-05 19:14:49.746791 | orchestrator | changed: [testbed-manager] 2025-06-05 19:14:49.747058 | orchestrator | 2025-06-05 19:14:49.750782 | orchestrator | 2025-06-05 19:14:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-05 19:14:49.750888 | orchestrator | 2025-06-05 19:14:49 | INFO  | Please wait and do not abort execution. 
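The resolvconf play above removes packages that manage /etc/resolv.conf, links systemd-resolved's stub file into place, and restarts the service. A sketch of the manual equivalent of the link step (paths taken from the task names; the target is parameterized here only so the sketch can be exercised without root — the role writes /etc/resolv.conf directly):

```shell
# Manual equivalent of "Link /run/systemd/resolve/stub-resolv.conf to
# /etc/resolv.conf". -sfn replaces any existing file or link atomically enough
# for this purpose and is idempotent on re-runs.
link_stub_resolv() {
    local target=${1:-/etc/resolv.conf}
    ln -sfn /run/systemd/resolve/stub-resolv.conf "$target"
}
```

After the link, the role restarts systemd-resolved, which is the final `changed` task in this play.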
2025-06-05 19:14:49.750967 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:14:49.751474 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-05 19:14:49.753356 | orchestrator | 2025-06-05 19:14:49.754104 | orchestrator | 2025-06-05 19:14:49.754810 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:14:49.755707 | orchestrator | Thursday 05 June 2025 19:14:49 +0000 (0:00:01.067) 0:00:08.567 ********* 2025-06-05 19:14:49.756292 | orchestrator | =============================================================================== 2025-06-05 19:14:49.757018 | orchestrator | Gathering Facts --------------------------------------------------------- 3.41s 2025-06-05 19:14:49.757542 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.07s 2025-06-05 19:14:49.758317 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.97s 2025-06-05 19:14:49.758881 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.88s 2025-06-05 19:14:49.759346 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.84s 2025-06-05 19:14:49.759904 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.45s 2025-06-05 19:14:49.760334 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.42s 2025-06-05 19:14:49.760766 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-05 19:14:49.761340 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-06-05 19:14:49.761823 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s 2025-06-05 
19:14:49.762244 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-06-05 19:14:49.762920 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-06-05 19:14:49.762944 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.05s 2025-06-05 19:14:50.174985 | orchestrator | + osism apply sshconfig 2025-06-05 19:14:51.797392 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:14:51.797497 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:14:51.797513 | orchestrator | Registering Redlock._release_script 2025-06-05 19:14:51.851467 | orchestrator | 2025-06-05 19:14:51 | INFO  | Task a5082383-d980-448c-b581-ce0c5c761809 (sshconfig) was prepared for execution. 2025-06-05 19:14:51.851570 | orchestrator | 2025-06-05 19:14:51 | INFO  | It takes a moment until task a5082383-d980-448c-b581-ce0c5c761809 (sshconfig) has been started and output is visible here. 
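The sshconfig play that follows writes one fragment per host under `.ssh/config.d` and then assembles them into a single config ("Assemble ssh config"). A hedged sketch of that fragment-then-assemble pattern — directory layout inferred from the task names, and the fragment contents (HostName, User) are illustrative, not the role's actual template:

```shell
# One fragment per host, then concatenate into the final config --
# the pattern the sshconfig role's task names suggest.
write_host_fragment() {
    local dir=$1 host=$2 addr=$3
    mkdir -p "$dir"
    printf 'Host %s\n    HostName %s\n    User dragon\n' "$host" "$addr" > "$dir/$host"
}

assemble_ssh_config() {
    local dir=$1 out=$2
    cat "$dir"/* > "$out"   # rough equivalent of Ansible's assemble module
}
```

Keeping per-host fragments makes the "Ensure config for each host exist" step idempotent per item, which is why the play reports one `changed` entry per testbed node.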
2025-06-05 19:14:55.822896 | orchestrator | 2025-06-05 19:14:55.824033 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-05 19:14:55.824964 | orchestrator | 2025-06-05 19:14:55.826449 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-05 19:14:55.828468 | orchestrator | Thursday 05 June 2025 19:14:55 +0000 (0:00:00.173) 0:00:00.173 ********* 2025-06-05 19:14:56.422611 | orchestrator | ok: [testbed-manager] 2025-06-05 19:14:56.422713 | orchestrator | 2025-06-05 19:14:56.422727 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-05 19:14:56.422741 | orchestrator | Thursday 05 June 2025 19:14:56 +0000 (0:00:00.604) 0:00:00.777 ********* 2025-06-05 19:14:56.924628 | orchestrator | changed: [testbed-manager] 2025-06-05 19:14:56.926809 | orchestrator | 2025-06-05 19:14:56.926962 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-05 19:14:56.926980 | orchestrator | Thursday 05 June 2025 19:14:56 +0000 (0:00:00.501) 0:00:01.278 ********* 2025-06-05 19:15:02.073150 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-05 19:15:02.073273 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-05 19:15:02.073664 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-05 19:15:02.074319 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-05 19:15:02.074947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-05 19:15:02.075424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-05 19:15:02.075973 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-05 19:15:02.076398 | orchestrator | 2025-06-05 19:15:02.076929 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] 
****************************** 2025-06-05 19:15:02.077387 | orchestrator | Thursday 05 June 2025 19:15:02 +0000 (0:00:05.149) 0:00:06.427 ********* 2025-06-05 19:15:02.122761 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:15:02.122864 | orchestrator | 2025-06-05 19:15:02.123644 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-05 19:15:02.124589 | orchestrator | Thursday 05 June 2025 19:15:02 +0000 (0:00:00.051) 0:00:06.479 ********* 2025-06-05 19:15:02.632590 | orchestrator | changed: [testbed-manager] 2025-06-05 19:15:02.633488 | orchestrator | 2025-06-05 19:15:02.635028 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:15:02.635070 | orchestrator | 2025-06-05 19:15:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-05 19:15:02.635083 | orchestrator | 2025-06-05 19:15:02 | INFO  | Please wait and do not abort execution. 
2025-06-05 19:15:02.636606 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:15:02.637203 | orchestrator | 2025-06-05 19:15:02.638310 | orchestrator | 2025-06-05 19:15:02.638943 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:15:02.639766 | orchestrator | Thursday 05 June 2025 19:15:02 +0000 (0:00:00.509) 0:00:06.989 ********* 2025-06-05 19:15:02.640322 | orchestrator | =============================================================================== 2025-06-05 19:15:02.641052 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.15s 2025-06-05 19:15:02.641690 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s 2025-06-05 19:15:02.642708 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.51s 2025-06-05 19:15:02.643722 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s 2025-06-05 19:15:02.643929 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.05s 2025-06-05 19:15:02.972061 | orchestrator | + osism apply known-hosts 2025-06-05 19:15:04.456504 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:15:04.456619 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:15:04.456634 | orchestrator | Registering Redlock._release_script 2025-06-05 19:15:04.507249 | orchestrator | 2025-06-05 19:15:04 | INFO  | Task f2b659ea-76bc-4d6d-83f6-032257bfd5cf (known-hosts) was prepared for execution. 2025-06-05 19:15:04.507327 | orchestrator | 2025-06-05 19:15:04 | INFO  | It takes a moment until task f2b659ea-76bc-4d6d-83f6-032257bfd5cf (known-hosts) has been started and output is visible here. 
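The known-hosts task just queued runs `ssh-keyscan` against each host and writes the scanned entries, as the play that follows shows. A sketch of an append-if-missing scan; the scanner command is parameterized purely as a testing assumption, and the key types are a guess consistent with the rsa/ecdsa/ed25519 entries visible in the output below:

```shell
# Scan a host's public keys and append only entries not already present,
# so repeated runs do not duplicate known_hosts lines.
update_known_hosts() {
    local host=$1 file=$2
    touch "$file"
    ${SCAN_CMD:-ssh-keyscan} -t rsa,ecdsa,ed25519 "$host" 2>/dev/null |
    while IFS= read -r line; do
        grep -qxF "$line" "$file" || printf '%s\n' "$line" >> "$file"
    done
}
```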
2025-06-05 19:15:07.428544 | orchestrator | 2025-06-05 19:15:07.429200 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-05 19:15:07.430072 | orchestrator | 2025-06-05 19:15:07.430959 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-05 19:15:07.432242 | orchestrator | Thursday 05 June 2025 19:15:07 +0000 (0:00:00.121) 0:00:00.121 ********* 2025-06-05 19:15:13.197833 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-05 19:15:13.199633 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-05 19:15:13.199693 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-05 19:15:13.200389 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-05 19:15:13.200961 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-05 19:15:13.201940 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-05 19:15:13.205321 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-05 19:15:13.205430 | orchestrator | 2025-06-05 19:15:13.208253 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-05 19:15:13.208481 | orchestrator | Thursday 05 June 2025 19:15:13 +0000 (0:00:05.766) 0:00:05.888 ********* 2025-06-05 19:15:13.358310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-05 19:15:13.358772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-05 19:15:13.359952 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-05 19:15:13.360442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-05 19:15:13.361058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-05 19:15:13.362219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-05 19:15:13.362914 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-05 19:15:13.363567 | orchestrator | 2025-06-05 19:15:13.363959 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-05 19:15:13.364442 | orchestrator | Thursday 05 June 2025 19:15:13 +0000 (0:00:00.163) 0:00:06.051 ********* 2025-06-05 19:15:14.492066 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIERvVXAVfLwj/iLMem8dyKMNW4879eZ1uFFjABWmORHX) 2025-06-05 19:15:14.492270 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCxIdHf6qxQExnxde64dxBKPxn4jcuRfD0roZ7EVsGsinHuet6q+WcPNMh5ZxQJhVRseQjWNW8gjMb9uSR1iIYoyaiQZXGIUGdFdcAArGc636V3Nq9XIJZ5fYAxXnhrEcFPxNBXRrv9m4Kwe0OHE2KwCHsP1TDKus7KkAPsB80Q+G6PwKvWCmTb76uT+XOZ6wybyLWTUHDbWJjiQ8UA8ly42c2HRBMGVwkkGFj86ebLlmrkXSsSX8D7T0uS68e6orL1lB6oGGubxyq2+g6Almm/pHiSP9qWXaKCOE8AMTgqf3HQ519aNeClADK094G99ljBIJDBgTC36FY4Hxp3dpRxYofqTiLebZVASvV7OPNpPenD2usRPgoIzry1k42yGNjowl3Pb7RuXfO7j79VpuXoq4x1G0o9Vq8/OZQHYcys4ZKJJUb2Nm4H1dRkRQmjjrux9/PMA35YipeFAfF4B23NcXU916EGQswHFPwl2EzOwKhSH8esSYqMA4wB418xCHs=)
2025-06-05 19:15:14.492410 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNs855LZcLqjjNSG/aLKMuMNs+TqPWUI5bb2nx+k2pbfQ5BOzr6+q7rlyz5fol2iCf3MrLy2BPfX4P5HqhvUR4I=)
2025-06-05 19:15:14.496850 | orchestrator |
2025-06-05 19:15:14.497731 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:14.498305 | orchestrator | Thursday 05 June 2025 19:15:14 +0000 (0:00:01.131) 0:00:07.183 *********
2025-06-05 19:15:15.514477 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+ed0+S62MQ6xIqSrC8aiKUOVGp6pZ6UECj92uY8tAJ)
2025-06-05 19:15:15.515905 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBOaPX8iaWceUXgsYWZEwfbsdymzOUuj8pLf6IoSYv0imvesn08clrgn0Bc02GUQqFa31+QL+0FeHOCDHvSA5xT32uH2q3OQuIKfsPJxPPlELnjcFOmk1MkgwSU0HfuPta9Q/URpXVpJ0aHbKJkxeURhLe9rj988IhLWkeHFaqsyct7fenmF3aWEtDYaVs+CbR77TFEYenmxjiCIXWrDLrQ+i30GU/QKytAckiKJ1fLd7nRG33XtH0iRz+9+Y2vB9nW6BlLYBA0NIuvUS0o7if9oSSkFvYGdD8Mf647tXTOrjH+zKHjopmTo5VS4ADAZSkMre6zgvpvMVur5gKc3jDOdZUrVaEhPKa22WBVHV6+zIhCTpPnASFeDxfvAAytdpNzzrhhBSiCEPzIpviOwRUDH6KFY1XVxuiSBi4XC1iMCI5clDhyz59eTqyBoudl0mnf+g+ed6d79GksJue/M+lTiZn3k4dNOswV8TpE2F82GrMZOWrPYFk2qfF/14DQQE=)
2025-06-05 19:15:15.516750 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKwGD/tC5tkveczmh6yFmo8RIdY+IcmpeGZocmKMQZYsz+t334gHSWGHTEkuOFQf0GQPnCgxBHwRkSwlOCaXS0=)
2025-06-05 19:15:15.517429 | orchestrator |
2025-06-05 19:15:15.518386 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:15.519013 | orchestrator | Thursday 05 June 2025 19:15:15 +0000 (0:00:01.022) 0:00:08.206 *********
2025-06-05 19:15:16.538139 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeMIyOFI6sm6hectsQEp65kE2/QDcAGuMAsMB/FogPXFEuOu69C4ZSraPnU/oaQxb2HlPHHUfOW/1qNiHnywrS5X0GeZYZbPEKiFYDOefnjzxu+Md0f3VgIDkQ1EEz2JjUms27bQVYUdoEInGBCbydNOg9xU4zhehlx1Xj29z/v7ajajKUcgRXqegTs9v+0KdPrWVk82Nfl1Bh9FF12xPu/dra2cFJJVLAr+rt9RGsLdkWMrLTXeqX+JpJ1iLnni1qRKsSny9DynlHlmPiuRwWKOk5W7CPy/2wkk/ggGPjf4Mc/CrugqGewuz+vXOYRRS5BDjdb4STnzcQkYvqu/T/98wEMJWvjbRzkEe53s+ojbjjvqmZiN371M90UjRPGrGBlIsUJJ4ev1TUhd/pqC16HraajysmVQ1EO14XikENvYk/Ies+7hnjg7wTi1V+A1CWNRPNcwQDR3Mh+hviO493Rom36BWxXK0hvGxIxRJTcsJCd1+UoMpZKjldYsRwNuc=)
2025-06-05 19:15:16.538948 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDqDc+Vx/PM5tpDLQxSA9FvsTWm0mMKCqijwWlufjEc0O4erTiurOis1re8WYA4erP4qvVer4FEWtoEeCL+Lzqs=)
2025-06-05 19:15:16.539948 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVpuIvugyDGaqgiRZp5GJ618OFn7ehAsE7sDpPyIcPa)
2025-06-05 19:15:16.541054 | orchestrator |
2025-06-05 19:15:16.541728 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:16.542544 | orchestrator | Thursday 05 June 2025 19:15:16 +0000 (0:00:01.023) 0:00:09.230 *********
2025-06-05 19:15:17.586790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlz6ZXLBYCBG8foWfBpWChk3odznDFfGgfl3V8UQ22gGIyoMvBczxQQ29WBpl8JmjsQ+moSpg8Z1dMwHxHcjb4rjc6+FX+SVvDzBTFionN0/nZ9/jeVIT0PSgz41sE38A+qtQ6pqH4T9VdqzrnDgSiis7HDuXpH6DcGzMlPk6BrygzcLNwzks8FC8SrPDCsxjnkAR5SuMXoq3EATZ50gshSIfTAXL1uwF0INdzcb8rh3GZvwqqV2JVr1BK/kCwPpJE6AEIdtK/2CZnusEL5lbafoFSsX2xWf0QOqIFVEKmHfoBh8N+uJYEr//IADJ5tF/TYIuq+dHxzkQF4FKEkxaJ4LseE2LXXSLE+9IL2N+ConXPx3Rtp1qHFe1DxplEFKvN3lTClknb0AL4vpKjcjBVAMGtIxez11ytej1K7IGJYH/uPZfgEEEtjDdSBCn6jGkJA5bI3/o5TYrsiwqOR8wDOz0y1zxQ9GvG7htFGRYgph0CtNwzSaWle87r7muH8qU=)
2025-06-05 19:15:17.587315 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCScVfTAMfWXWEZafAumf+Giyt6D9opqwFvm2ycSJCBW/Cz9z03ToSRzDdHW8zkSnglypTPgC8b9rIEHFSP0ZgY=)
2025-06-05 19:15:17.588353 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrcMB0rvl7pc/lArcyJCekkvjPq08QadWQjgEzVSXrv)
2025-06-05 19:15:17.588899 | orchestrator |
2025-06-05 19:15:17.589532 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:17.590087 | orchestrator | Thursday 05 June 2025 19:15:17 +0000 (0:00:01.048) 0:00:10.278 *********
2025-06-05 19:15:18.602495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfmNHmtmnT/TQTq7DdZNNeD7iFPQB60Jthh8dWxnSIhPRcX5uT6AZMx6ogDJBb74e7I+0p+HeS15EK3wzygecZqfAvEiAUJD7R1fPg++4Wd9FJ8qcBQFMt7Qum44BYFDVC4it7GKBiq1ZiQGuLvL3PyCcqTJhz3bEQGmiF6XWWmt63lj3heWaBaaCT0ksHs1rcz5gR3MuyHVXXjF3RcXKYflfHeCUmsTn9Fhdb0nDcTMQ1MsmDcJZl+oHNVtYnOUqHKH1mJuQ9iiTMISWYqKkb/wmBdihH/ahT+SlCbw9xLZ8GTs1JefG3GzIdwsjQiDq7KJetUoDuAWCRuQRPjZ3S322ztP5RQjnyfKMugB3qOh2+tutFA+XmLSQxOPlSRiS5EgPpxXdNxIdDHWdNXJ1KlgR3XXSCH6LNEABaZ0Zek+Ffl1mwJ3LIIy8yU/UhfYefH15dkF1eT9Wn1eTdqyJ4GTK8pgSVyqgKfZ5WwMH7ZqKstnxrIoKZ2LhyDoDfSnU=)
2025-06-05 19:15:18.603456 | orchestrator | changed: [testbed-manager] =>
(item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQzwMv8PM7eoP7kr2cQrsnpkFg5zHz+kxNPusE1hL57sBUks2GBqEiSn8f7ChleT89Uo9r91oyZ2cDPCAvID9E=)
2025-06-05 19:15:18.604776 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII2rJqQBOFvTebMzS5Ic6LrdYgeu3XQ+Na8uXtGwoQwx)
2025-06-05 19:15:18.605976 | orchestrator |
2025-06-05 19:15:18.608502 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:18.609633 | orchestrator | Thursday 05 June 2025 19:15:18 +0000 (0:00:01.015) 0:00:11.294 *********
2025-06-05 19:15:19.647493 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI0JMEfipAmLXbcxZ7zviwiUb385NT/Sp58abf/vJkUbAlsfP3JC9QpN8OPy0tlpx9CnzKmoQwVJGkmcmUeR2ak=)
2025-06-05 19:15:19.647616 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDwRg2LcrCx5mQtlOPm6j1cUnCeQb55D4KBZ6osEC51D3ihuAATURYPezNzN9jD9IBazLSrXeqBL4FV5l9xQDPfP5b8R+H/5NbBxsHwLjK/VPuSb2L27Ie36HdLCWwhTinxTicoV3woncueOdYyDmcLfSm+1nfPHjm2sgOpd0tRyqigCnv6JWOcuEL4o0uaPhwbUWqaK9RZLvTPiAh3/cENJhxg59OecaUHXG1ZMYmSbjZqU+QBnv4OdshodCCPuawM3cimDdwJnt0mBnUc8EbCS41G9EuYHXa7tvWU6tNPStgERJIoJ9nfG4zjTn+Oqbm/Ai2D2kUl/Wz7mgX+mhcxjQXdVcuNvJmxp+cP24OXQ8MVn4niTYTdhJEsUzbLO0pJRmUUzlbVUeNHlDs4sQZxkdBq3NDDaUAkM1sJersVuIda5Xddo5rx3xaTta7v+sDbiQLcr+I2K79WX78Ge/reELoareYMnlm5z+AWVKhPzYuvCJmwCV43KBAsoCqkb8U=)
2025-06-05 19:15:19.647630 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8oHB70faeEYvOj6irACNUAh9yqMlBLA3qHNIf1GWc/)
2025-06-05 19:15:19.647683 | orchestrator |
2025-06-05 19:15:19.648598 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:19.648921 | orchestrator | Thursday 05 June 2025 19:15:19 +0000 (0:00:01.044) 0:00:12.338 *********
2025-06-05 19:15:20.672954 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLElgyIjXO+UT/WJutAaQJaAK0bKD0eYPNmgG/afuxcYz+vph8th407scw5KbXbcoPn95nw/W6jz992mkmdqskg=)
2025-06-05 19:15:20.673077 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQzFEWLpmfmSDsWEOLaxFXQforQhgufmQe5BZMj+7GiIHiVuN+MPJ+nxdomm3B0R0jtrWl43DGSbmNQK3jyjhsyBi+AAOLvNkDoHkr2ByWJkyU2RUEQewOe3a7Q0vUPdcCnIwqPuVdG4Pg9wlR2KDWEAtsGRVOrgZVQSGKwY8BThzqAJU9hY09Nzbay1T4+CJtDPTO0WvN8d7LRO2uGllFWcxnihq9eBu3qaNH6jwugE93z1pfLauq/PW2Q/3pwEyS6y3mnytK7dwMCVOyVz2s9VkFlwoWKRn4HgrkL9+aT0zN8/mB70beIVyFnVTTgi5UjLo5qSin9B1svGI1wqx32ZDhVMRunDLqjgoAF6HsxYhjD+mfe7IxWFj56x8a5KDObb6OyI8DdyOwUYSAvTUQZh8B0a68qloRwh6azlJBUGB1pwE0u8KvR6JD73CCBWn9/4WpbYrsMebxVY/c5IBYnlMShL9iOmUx9GV/YwTvKA8L+uSWVq52kFatm9KDu8s=)
2025-06-05 19:15:20.673875 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID3mYn8cvuSI2i9vrQRKXGkE/nVRJzqAK22UN58vl6W2)
2025-06-05 19:15:20.674996 | orchestrator |
2025-06-05 19:15:20.675384 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-05 19:15:20.676139 | orchestrator | Thursday 05 June 2025 19:15:20 +0000 (0:00:01.027) 0:00:13.366 *********
2025-06-05 19:15:25.826715 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-05 19:15:25.827000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-05 19:15:25.827025 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-05 19:15:25.828126 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-05 19:15:25.829119 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-05 19:15:25.830316 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-05 19:15:25.830845 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-05 19:15:25.832777 | orchestrator |
2025-06-05 19:15:25.833256 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-05 19:15:25.833673 | orchestrator | Thursday 05 June 2025 19:15:25 +0000 (0:00:05.152) 0:00:18.518 *********
2025-06-05 19:15:25.998694 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-05 19:15:25.999189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-05 19:15:26.001552 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-05 19:15:26.002310 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-05 19:15:26.003690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-05 19:15:26.004688 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-05 19:15:26.005093 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-05 19:15:26.005696 | orchestrator |
2025-06-05 19:15:26.006530
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:26.008303 | orchestrator | Thursday 05 June 2025 19:15:25 +0000 (0:00:00.173) 0:00:18.692 *********
2025-06-05 19:15:27.028623 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIERvVXAVfLwj/iLMem8dyKMNW4879eZ1uFFjABWmORHX)
2025-06-05 19:15:27.028736 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCxIdHf6qxQExnxde64dxBKPxn4jcuRfD0roZ7EVsGsinHuet6q+WcPNMh5ZxQJhVRseQjWNW8gjMb9uSR1iIYoyaiQZXGIUGdFdcAArGc636V3Nq9XIJZ5fYAxXnhrEcFPxNBXRrv9m4Kwe0OHE2KwCHsP1TDKus7KkAPsB80Q+G6PwKvWCmTb76uT+XOZ6wybyLWTUHDbWJjiQ8UA8ly42c2HRBMGVwkkGFj86ebLlmrkXSsSX8D7T0uS68e6orL1lB6oGGubxyq2+g6Almm/pHiSP9qWXaKCOE8AMTgqf3HQ519aNeClADK094G99ljBIJDBgTC36FY4Hxp3dpRxYofqTiLebZVASvV7OPNpPenD2usRPgoIzry1k42yGNjowl3Pb7RuXfO7j79VpuXoq4x1G0o9Vq8/OZQHYcys4ZKJJUb2Nm4H1dRkRQmjjrux9/PMA35YipeFAfF4B23NcXU916EGQswHFPwl2EzOwKhSH8esSYqMA4wB418xCHs=)
2025-06-05 19:15:27.029825 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNs855LZcLqjjNSG/aLKMuMNs+TqPWUI5bb2nx+k2pbfQ5BOzr6+q7rlyz5fol2iCf3MrLy2BPfX4P5HqhvUR4I=)
2025-06-05 19:15:27.030752 | orchestrator |
2025-06-05 19:15:27.031216 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:27.032121 | orchestrator | Thursday 05 June 2025 19:15:27 +0000 (0:00:01.028) 0:00:19.720 *********
2025-06-05 19:15:28.014565 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHKwGD/tC5tkveczmh6yFmo8RIdY+IcmpeGZocmKMQZYsz+t334gHSWGHTEkuOFQf0GQPnCgxBHwRkSwlOCaXS0=)
2025-06-05 19:15:28.015516 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBOaPX8iaWceUXgsYWZEwfbsdymzOUuj8pLf6IoSYv0imvesn08clrgn0Bc02GUQqFa31+QL+0FeHOCDHvSA5xT32uH2q3OQuIKfsPJxPPlELnjcFOmk1MkgwSU0HfuPta9Q/URpXVpJ0aHbKJkxeURhLe9rj988IhLWkeHFaqsyct7fenmF3aWEtDYaVs+CbR77TFEYenmxjiCIXWrDLrQ+i30GU/QKytAckiKJ1fLd7nRG33XtH0iRz+9+Y2vB9nW6BlLYBA0NIuvUS0o7if9oSSkFvYGdD8Mf647tXTOrjH+zKHjopmTo5VS4ADAZSkMre6zgvpvMVur5gKc3jDOdZUrVaEhPKa22WBVHV6+zIhCTpPnASFeDxfvAAytdpNzzrhhBSiCEPzIpviOwRUDH6KFY1XVxuiSBi4XC1iMCI5clDhyz59eTqyBoudl0mnf+g+ed6d79GksJue/M+lTiZn3k4dNOswV8TpE2F82GrMZOWrPYFk2qfF/14DQQE=)
2025-06-05 19:15:28.015997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJ+ed0+S62MQ6xIqSrC8aiKUOVGp6pZ6UECj92uY8tAJ)
2025-06-05 19:15:28.016987 | orchestrator |
2025-06-05 19:15:28.017298 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:28.017912 | orchestrator | Thursday 05 June 2025 19:15:28 +0000 (0:00:00.986) 0:00:20.707 *********
2025-06-05 19:15:29.039995 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCeMIyOFI6sm6hectsQEp65kE2/QDcAGuMAsMB/FogPXFEuOu69C4ZSraPnU/oaQxb2HlPHHUfOW/1qNiHnywrS5X0GeZYZbPEKiFYDOefnjzxu+Md0f3VgIDkQ1EEz2JjUms27bQVYUdoEInGBCbydNOg9xU4zhehlx1Xj29z/v7ajajKUcgRXqegTs9v+0KdPrWVk82Nfl1Bh9FF12xPu/dra2cFJJVLAr+rt9RGsLdkWMrLTXeqX+JpJ1iLnni1qRKsSny9DynlHlmPiuRwWKOk5W7CPy/2wkk/ggGPjf4Mc/CrugqGewuz+vXOYRRS5BDjdb4STnzcQkYvqu/T/98wEMJWvjbRzkEe53s+ojbjjvqmZiN371M90UjRPGrGBlIsUJJ4ev1TUhd/pqC16HraajysmVQ1EO14XikENvYk/Ies+7hnjg7wTi1V+A1CWNRPNcwQDR3Mh+hviO493Rom36BWxXK0hvGxIxRJTcsJCd1+UoMpZKjldYsRwNuc=)
2025-06-05 19:15:29.040488 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDqDc+Vx/PM5tpDLQxSA9FvsTWm0mMKCqijwWlufjEc0O4erTiurOis1re8WYA4erP4qvVer4FEWtoEeCL+Lzqs=)
2025-06-05 19:15:29.041474 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVpuIvugyDGaqgiRZp5GJ618OFn7ehAsE7sDpPyIcPa)
2025-06-05 19:15:29.042376 | orchestrator |
2025-06-05 19:15:29.042920 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:29.043522 | orchestrator | Thursday 05 June 2025 19:15:29 +0000 (0:00:01.024) 0:00:21.731 *********
2025-06-05 19:15:30.086075 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDlz6ZXLBYCBG8foWfBpWChk3odznDFfGgfl3V8UQ22gGIyoMvBczxQQ29WBpl8JmjsQ+moSpg8Z1dMwHxHcjb4rjc6+FX+SVvDzBTFionN0/nZ9/jeVIT0PSgz41sE38A+qtQ6pqH4T9VdqzrnDgSiis7HDuXpH6DcGzMlPk6BrygzcLNwzks8FC8SrPDCsxjnkAR5SuMXoq3EATZ50gshSIfTAXL1uwF0INdzcb8rh3GZvwqqV2JVr1BK/kCwPpJE6AEIdtK/2CZnusEL5lbafoFSsX2xWf0QOqIFVEKmHfoBh8N+uJYEr//IADJ5tF/TYIuq+dHxzkQF4FKEkxaJ4LseE2LXXSLE+9IL2N+ConXPx3Rtp1qHFe1DxplEFKvN3lTClknb0AL4vpKjcjBVAMGtIxez11ytej1K7IGJYH/uPZfgEEEtjDdSBCn6jGkJA5bI3/o5TYrsiwqOR8wDOz0y1zxQ9GvG7htFGRYgph0CtNwzSaWle87r7muH8qU=)
2025-06-05 19:15:30.086384 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCScVfTAMfWXWEZafAumf+Giyt6D9opqwFvm2ycSJCBW/Cz9z03ToSRzDdHW8zkSnglypTPgC8b9rIEHFSP0ZgY=)
2025-06-05 19:15:30.087331 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDrcMB0rvl7pc/lArcyJCekkvjPq08QadWQjgEzVSXrv)
2025-06-05 19:15:30.088300 | orchestrator |
2025-06-05 19:15:30.088882 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:30.089518 | orchestrator | Thursday 05 June 2025 19:15:30 +0000 (0:00:01.046) 0:00:22.778 *********
2025-06-05 19:15:31.122378 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNQzwMv8PM7eoP7kr2cQrsnpkFg5zHz+kxNPusE1hL57sBUks2GBqEiSn8f7ChleT89Uo9r91oyZ2cDPCAvID9E=)
2025-06-05 19:15:31.123852 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCfmNHmtmnT/TQTq7DdZNNeD7iFPQB60Jthh8dWxnSIhPRcX5uT6AZMx6ogDJBb74e7I+0p+HeS15EK3wzygecZqfAvEiAUJD7R1fPg++4Wd9FJ8qcBQFMt7Qum44BYFDVC4it7GKBiq1ZiQGuLvL3PyCcqTJhz3bEQGmiF6XWWmt63lj3heWaBaaCT0ksHs1rcz5gR3MuyHVXXjF3RcXKYflfHeCUmsTn9Fhdb0nDcTMQ1MsmDcJZl+oHNVtYnOUqHKH1mJuQ9iiTMISWYqKkb/wmBdihH/ahT+SlCbw9xLZ8GTs1JefG3GzIdwsjQiDq7KJetUoDuAWCRuQRPjZ3S322ztP5RQjnyfKMugB3qOh2+tutFA+XmLSQxOPlSRiS5EgPpxXdNxIdDHWdNXJ1KlgR3XXSCH6LNEABaZ0Zek+Ffl1mwJ3LIIy8yU/UhfYefH15dkF1eT9Wn1eTdqyJ4GTK8pgSVyqgKfZ5WwMH7ZqKstnxrIoKZ2LhyDoDfSnU=)
2025-06-05 19:15:31.124842 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAII2rJqQBOFvTebMzS5Ic6LrdYgeu3XQ+Na8uXtGwoQwx)
2025-06-05 19:15:31.125816 | orchestrator |
2025-06-05 19:15:31.126448 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:31.127019 | orchestrator | Thursday 05 June 2025 19:15:31 +0000 (0:00:01.036) 0:00:23.815 *********
2025-06-05 19:15:32.150609 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI0JMEfipAmLXbcxZ7zviwiUb385NT/Sp58abf/vJkUbAlsfP3JC9QpN8OPy0tlpx9CnzKmoQwVJGkmcmUeR2ak=)
2025-06-05 19:15:32.150718 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDwRg2LcrCx5mQtlOPm6j1cUnCeQb55D4KBZ6osEC51D3ihuAATURYPezNzN9jD9IBazLSrXeqBL4FV5l9xQDPfP5b8R+H/5NbBxsHwLjK/VPuSb2L27Ie36HdLCWwhTinxTicoV3woncueOdYyDmcLfSm+1nfPHjm2sgOpd0tRyqigCnv6JWOcuEL4o0uaPhwbUWqaK9RZLvTPiAh3/cENJhxg59OecaUHXG1ZMYmSbjZqU+QBnv4OdshodCCPuawM3cimDdwJnt0mBnUc8EbCS41G9EuYHXa7tvWU6tNPStgERJIoJ9nfG4zjTn+Oqbm/Ai2D2kUl/Wz7mgX+mhcxjQXdVcuNvJmxp+cP24OXQ8MVn4niTYTdhJEsUzbLO0pJRmUUzlbVUeNHlDs4sQZxkdBq3NDDaUAkM1sJersVuIda5Xddo5rx3xaTta7v+sDbiQLcr+I2K79WX78Ge/reELoareYMnlm5z+AWVKhPzYuvCJmwCV43KBAsoCqkb8U=)
2025-06-05 19:15:32.151629 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8oHB70faeEYvOj6irACNUAh9yqMlBLA3qHNIf1GWc/)
2025-06-05 19:15:32.151892 | orchestrator |
2025-06-05 19:15:32.152038 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-05 19:15:32.152731 | orchestrator | Thursday 05 June 2025 19:15:32 +0000 (0:00:01.028) 0:00:24.843 *********
2025-06-05 19:15:33.217659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID3mYn8cvuSI2i9vrQRKXGkE/nVRJzqAK22UN58vl6W2)
2025-06-05 19:15:33.217857 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQzFEWLpmfmSDsWEOLaxFXQforQhgufmQe5BZMj+7GiIHiVuN+MPJ+nxdomm3B0R0jtrWl43DGSbmNQK3jyjhsyBi+AAOLvNkDoHkr2ByWJkyU2RUEQewOe3a7Q0vUPdcCnIwqPuVdG4Pg9wlR2KDWEAtsGRVOrgZVQSGKwY8BThzqAJU9hY09Nzbay1T4+CJtDPTO0WvN8d7LRO2uGllFWcxnihq9eBu3qaNH6jwugE93z1pfLauq/PW2Q/3pwEyS6y3mnytK7dwMCVOyVz2s9VkFlwoWKRn4HgrkL9+aT0zN8/mB70beIVyFnVTTgi5UjLo5qSin9B1svGI1wqx32ZDhVMRunDLqjgoAF6HsxYhjD+mfe7IxWFj56x8a5KDObb6OyI8DdyOwUYSAvTUQZh8B0a68qloRwh6azlJBUGB1pwE0u8KvR6JD73CCBWn9/4WpbYrsMebxVY/c5IBYnlMShL9iOmUx9GV/YwTvKA8L+uSWVq52kFatm9KDu8s=)
2025-06-05 19:15:33.218464 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLElgyIjXO+UT/WJutAaQJaAK0bKD0eYPNmgG/afuxcYz+vph8th407scw5KbXbcoPn95nw/W6jz992mkmdqskg=)
2025-06-05 19:15:33.219294 | orchestrator |
2025-06-05 19:15:33.219909 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-06-05 19:15:33.220333 | orchestrator | Thursday 05 June 2025 19:15:33 +0000 (0:00:01.066) 0:00:25.910 *********
2025-06-05 19:15:33.370865 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-05 19:15:33.370976 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-05 19:15:33.372176 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-05 19:15:33.373070 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-05 19:15:33.373664 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-05 19:15:33.374334 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-05 19:15:33.374976 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-05 19:15:33.375895 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:15:33.376910 | orchestrator |
2025-06-05 19:15:33.377447 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-06-05 19:15:33.378066 | orchestrator | Thursday 05 June 2025 19:15:33 +0000 (0:00:00.153) 0:00:26.063 *********
2025-06-05 19:15:33.426677 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:15:33.427130 | orchestrator |
2025-06-05 19:15:33.427867 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-06-05 19:15:33.428386 | orchestrator | Thursday 05 June 2025 19:15:33 +0000 (0:00:00.057) 0:00:26.121 *********
2025-06-05 19:15:33.486477 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:15:33.487444 | orchestrator |
2025-06-05 19:15:33.487542 | orchestrator | TASK
[osism.commons.known_hosts : Set file permissions] ************************
2025-06-05 19:15:33.488582 | orchestrator | Thursday 05 June 2025 19:15:33 +0000 (0:00:00.059) 0:00:26.180 *********
2025-06-05 19:15:34.013506 | orchestrator | changed: [testbed-manager]
2025-06-05 19:15:34.013895 | orchestrator |
2025-06-05 19:15:34.015153 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:15:34.015205 | orchestrator | 2025-06-05 19:15:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:15:34.015535 | orchestrator | 2025-06-05 19:15:34 | INFO  | Please wait and do not abort execution.
2025-06-05 19:15:34.016202 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-05 19:15:34.016916 | orchestrator |
2025-06-05 19:15:34.017165 | orchestrator |
2025-06-05 19:15:34.017814 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:15:34.018604 | orchestrator | Thursday 05 June 2025 19:15:34 +0000 (0:00:00.527) 0:00:26.708 *********
2025-06-05 19:15:34.019225 | orchestrator | ===============================================================================
2025-06-05 19:15:34.019620 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.77s
2025-06-05 19:15:34.020526 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.15s
2025-06-05 19:15:34.021328 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s
2025-06-05 19:15:34.021799 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-06-05 19:15:34.022560 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-05 19:15:34.022971 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-06-05 19:15:34.023386 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-05 19:15:34.023846 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-06-05 19:15:34.024238 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-05 19:15:34.024913 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-05 19:15:34.025278 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-06-05 19:15:34.025594 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-05 19:15:34.025945 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-05 19:15:34.026519 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-05 19:15:34.027365 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-06-05 19:15:34.027728 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-06-05 19:15:34.028058 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.53s
2025-06-05 19:15:34.028369 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-06-05 19:15:34.028959 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s
2025-06-05 19:15:34.029379 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2025-06-05 19:15:34.486194 | orchestrator | + osism apply squid
2025-06-05 19:15:36.121482 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:15:36.121581 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:15:36.121595 | orchestrator | Registering Redlock._release_script
2025-06-05 19:15:36.177683 | orchestrator | 2025-06-05 19:15:36 | INFO  | Task e7c1cc3f-c38e-415b-b80b-c4b195eb2438 (squid) was prepared for execution.
2025-06-05 19:15:36.177869 | orchestrator | 2025-06-05 19:15:36 | INFO  | It takes a moment until task e7c1cc3f-c38e-415b-b80b-c4b195eb2438 (squid) has been started and output is visible here.
2025-06-05 19:15:39.836994 | orchestrator |
2025-06-05 19:15:39.837278 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-06-05 19:15:39.837306 | orchestrator |
2025-06-05 19:15:39.837465 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-06-05 19:15:39.838524 | orchestrator | Thursday 05 June 2025 19:15:39 +0000 (0:00:00.123) 0:00:00.123 *********
2025-06-05 19:15:39.914884 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-06-05 19:15:39.915426 | orchestrator |
2025-06-05 19:15:39.916271 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-06-05 19:15:39.916507 | orchestrator | Thursday 05 June 2025 19:15:39 +0000 (0:00:00.079) 0:00:00.202 *********
2025-06-05 19:15:40.985509 | orchestrator | ok: [testbed-manager]
2025-06-05 19:15:40.986825 | orchestrator |
2025-06-05 19:15:40.988141 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-06-05 19:15:40.988606 | orchestrator | Thursday 05 June 2025 19:15:40 +0000 (0:00:01.069) 0:00:01.272 *********
2025-06-05 19:15:42.028488 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-06-05 19:15:42.029475 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-06-05 19:15:42.030439 |
orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-06-05 19:15:42.030933 | orchestrator |
2025-06-05 19:15:42.031796 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-06-05 19:15:42.032402 | orchestrator | Thursday 05 June 2025 19:15:42 +0000 (0:00:01.043) 0:00:02.315 *********
2025-06-05 19:15:42.963044 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-06-05 19:15:42.963195 | orchestrator |
2025-06-05 19:15:42.963644 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-06-05 19:15:42.964230 | orchestrator | Thursday 05 June 2025 19:15:42 +0000 (0:00:00.933) 0:00:03.249 *********
2025-06-05 19:15:43.288593 | orchestrator | ok: [testbed-manager]
2025-06-05 19:15:43.288689 | orchestrator |
2025-06-05 19:15:43.288868 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-06-05 19:15:43.291297 | orchestrator | Thursday 05 June 2025 19:15:43 +0000 (0:00:00.325) 0:00:03.575 *********
2025-06-05 19:15:44.168234 | orchestrator | changed: [testbed-manager]
2025-06-05 19:15:44.168418 | orchestrator |
2025-06-05 19:15:44.169512 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-06-05 19:15:44.169913 | orchestrator | Thursday 05 June 2025 19:15:44 +0000 (0:00:00.878) 0:00:04.454 *********
2025-06-05 19:16:15.218552 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-06-05 19:16:15.218684 | orchestrator | ok: [testbed-manager]
2025-06-05 19:16:15.218702 | orchestrator |
2025-06-05 19:16:15.218900 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-06-05 19:16:15.220428 | orchestrator | Thursday 05 June 2025 19:16:15 +0000 (0:00:31.044) 0:00:35.499 *********
2025-06-05 19:16:27.735098 | orchestrator | changed: [testbed-manager]
2025-06-05 19:16:27.735269 | orchestrator |
2025-06-05 19:16:27.736165 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-06-05 19:16:27.738267 | orchestrator | Thursday 05 June 2025 19:16:27 +0000 (0:00:12.520) 0:00:48.019 *********
2025-06-05 19:17:27.821187 | orchestrator | Pausing for 60 seconds
2025-06-05 19:17:27.821295 | orchestrator | changed: [testbed-manager]
2025-06-05 19:17:27.821313 | orchestrator |
2025-06-05 19:17:27.821433 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-06-05 19:17:27.822794 | orchestrator | Thursday 05 June 2025 19:17:27 +0000 (0:01:00.081) 0:01:48.101 *********
2025-06-05 19:17:27.893202 | orchestrator | ok: [testbed-manager]
2025-06-05 19:17:27.893367 | orchestrator |
2025-06-05 19:17:27.893963 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-06-05 19:17:27.894564 | orchestrator | Thursday 05 June 2025 19:17:27 +0000 (0:00:00.078) 0:01:48.179 *********
2025-06-05 19:17:28.568196 | orchestrator | changed: [testbed-manager]
2025-06-05 19:17:28.568752 | orchestrator |
2025-06-05 19:17:28.569804 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:17:28.570554 | orchestrator | 2025-06-05 19:17:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:17:28.570580 | orchestrator | 2025-06-05 19:17:28 | INFO  | Please wait and do not abort execution.
2025-06-05 19:17:28.571688 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:17:28.572280 | orchestrator |
2025-06-05 19:17:28.573280 | orchestrator |
2025-06-05 19:17:28.573470 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:17:28.574304 | orchestrator | Thursday 05 June 2025 19:17:28 +0000 (0:00:00.675) 0:01:48.855 *********
2025-06-05 19:17:28.574931 | orchestrator | ===============================================================================
2025-06-05 19:17:28.575499 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2025-06-05 19:17:28.576306 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.04s
2025-06-05 19:17:28.576710 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.52s
2025-06-05 19:17:28.577742 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.07s
2025-06-05 19:17:28.578295 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.04s
2025-06-05 19:17:28.579040 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s
2025-06-05 19:17:28.579875 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.88s
2025-06-05 19:17:28.580115 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.68s
2025-06-05 19:17:28.580984 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.33s
2025-06-05 19:17:28.581366 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2025-06-05 19:17:28.582129 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2025-06-05 19:17:29.034833 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]]
2025-06-05 19:17:29.034934 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml
2025-06-05 19:17:29.039490 | orchestrator | ++ semver 9.1.0 9.0.0
2025-06-05 19:17:29.102130 | orchestrator | + [[ 1 -lt 0 ]]
2025-06-05 19:17:29.103104 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-06-05 19:17:30.734803 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:17:30.734883 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:17:30.734890 | orchestrator | Registering Redlock._release_script
2025-06-05 19:17:30.789780 | orchestrator | 2025-06-05 19:17:30 | INFO  | Task 5ed56f13-a79f-4f31-8177-64bfb07c0810 (operator) was prepared for execution.
2025-06-05 19:17:30.789868 | orchestrator | 2025-06-05 19:17:30 | INFO  | It takes a moment until task 5ed56f13-a79f-4f31-8177-64bfb07c0810 (operator) has been started and output is visible here.
2025-06-05 19:17:34.593545 | orchestrator |
2025-06-05 19:17:34.593668 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-06-05 19:17:34.593686 | orchestrator |
2025-06-05 19:17:34.593698 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-05 19:17:34.595079 | orchestrator | Thursday 05 June 2025 19:17:34 +0000 (0:00:00.113) 0:00:00.113 *********
2025-06-05 19:17:37.826464 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:17:37.827416 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:17:37.828972 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:17:37.829939 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:37.830845 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:37.831805 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:37.832832 | orchestrator |
2025-06-05 19:17:37.833617 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-06-05 19:17:37.834350 | orchestrator | Thursday 05 June 2025 19:17:37 +0000 (0:00:03.236) 0:00:03.349 *********
2025-06-05 19:17:38.554825 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:38.554929 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:17:38.554944 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:17:38.555969 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:38.556459 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:17:38.557282 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:38.557936 | orchestrator |
2025-06-05 19:17:38.559624 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-06-05 19:17:38.559983 | orchestrator |
2025-06-05 19:17:38.560430 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-06-05 19:17:38.561020 | orchestrator | Thursday 05 June 2025 19:17:38 +0000 (0:00:00.726) 0:00:04.076 *********
2025-06-05 19:17:38.621172 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:17:38.643146 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:17:38.664168 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:17:38.705581 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:38.705763 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:38.706177 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:38.707113 | orchestrator |
2025-06-05 19:17:38.708184 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-06-05 19:17:38.708813 | orchestrator | Thursday 05 June 2025 19:17:38 +0000 (0:00:00.152) 0:00:04.228 *********
2025-06-05 19:17:38.766677 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:17:38.788268 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:17:38.811379 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:17:38.848824 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:38.849190 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:38.853163 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:38.853911 | orchestrator |
2025-06-05 19:17:38.854345 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-06-05 19:17:38.855026 | orchestrator | Thursday 05 June 2025 19:17:38 +0000 (0:00:00.144) 0:00:04.373 *********
2025-06-05 19:17:39.461956 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:39.462513 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:39.464158 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:39.464759 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:39.465646 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:39.466381 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:39.467437 | orchestrator |
2025-06-05 19:17:39.467975 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-06-05 19:17:39.468801 | orchestrator | Thursday 05 June 2025 19:17:39 +0000 (0:00:00.612) 0:00:04.985 *********
2025-06-05 19:17:40.271507 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:40.271814 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:40.273109 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:40.275720 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:40.276317 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:40.277125 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:40.278059 | orchestrator |
2025-06-05 19:17:40.278256 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-05 19:17:40.278821 | orchestrator | Thursday 05 June 2025 19:17:40 +0000 (0:00:00.807) 0:00:05.793 *********
2025-06-05 19:17:41.411299 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-05 19:17:41.411446 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-05 19:17:41.411555 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-05 19:17:41.412926 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-05 19:17:41.413659 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-05 19:17:41.414688 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-05 19:17:41.415953 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-05 19:17:41.416523 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-05 19:17:41.417482 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-05 19:17:41.418463 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-05 19:17:41.420165 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-05 19:17:41.420188 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-05 19:17:41.420482 | orchestrator |
2025-06-05 19:17:41.421345 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-05 19:17:41.422533 | orchestrator | Thursday 05 June 2025 19:17:41 +0000 (0:00:01.139) 0:00:06.933 *********
2025-06-05 19:17:42.587796 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:42.588899 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:42.589832 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:42.590291 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:42.590776 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:42.591816 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:42.592277 | orchestrator |
2025-06-05 19:17:42.592896 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-05 19:17:42.593239 | orchestrator | Thursday 05 June 2025 19:17:42 +0000 (0:00:01.175) 0:00:08.108 *********
2025-06-05 19:17:43.711722 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-05 19:17:43.712186 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-05 19:17:43.712300 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-05 19:17:43.898198 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-05 19:17:43.898272 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-05 19:17:43.901668 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-05 19:17:43.901767 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-05 19:17:43.901795 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-05 19:17:43.901842 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-05 19:17:43.901861 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-05 19:17:43.901880 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-05 19:17:43.901899 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-05 19:17:43.901969 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-05 19:17:43.902106 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-05 19:17:43.902127 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-05 19:17:43.902398 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-05 19:17:43.904139 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-05 19:17:43.904188 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-05 19:17:43.904201 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-05 19:17:43.904212 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-05 19:17:43.904610 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-05 19:17:43.904955 | orchestrator |
2025-06-05 19:17:43.906099 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-05 19:17:43.906122 | orchestrator | Thursday 05 June 2025 19:17:43 +0000 (0:00:01.312) 0:00:09.420 *********
2025-06-05 19:17:44.486769 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:44.487110 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:44.488695 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:44.489740 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:44.490274 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:44.491109 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:44.492116 | orchestrator |
2025-06-05 19:17:44.494203 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-05 19:17:44.495025 | orchestrator | Thursday 05 June 2025 19:17:44 +0000 (0:00:00.588) 0:00:10.009 *********
2025-06-05 19:17:44.576420 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:17:44.600725 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:17:44.645224 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:17:44.645856 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:17:44.647226 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:17:44.647283 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:17:44.648066 | orchestrator |
2025-06-05 19:17:44.648330 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-05 19:17:44.648854 | orchestrator | Thursday 05 June 2025 19:17:44 +0000 (0:00:00.159) 0:00:10.169 *********
2025-06-05 19:17:45.352295 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-05 19:17:45.352470 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:45.353348 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 19:17:45.353747 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:45.354853 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-05 19:17:45.355569 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:45.356073 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-05 19:17:45.356772 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:45.357365 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-05 19:17:45.357858 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:45.358468 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-05 19:17:45.358924 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:45.359391 | orchestrator |
2025-06-05 19:17:45.359936 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-05 19:17:45.360283 | orchestrator | Thursday 05 June 2025 19:17:45 +0000 (0:00:00.705) 0:00:10.874 *********
2025-06-05 19:17:45.393882 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:17:45.414333 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:17:45.455413 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:17:45.487723 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:17:45.487959 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:17:45.489880 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:17:45.490320 | orchestrator |
2025-06-05 19:17:45.491065 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-05 19:17:45.491708 | orchestrator | Thursday 05 June 2025 19:17:45 +0000 (0:00:00.136) 0:00:11.011 *********
2025-06-05 19:17:45.550819 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:17:45.575403 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:17:45.597411 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:17:45.630078 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:17:45.630326 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:17:45.630869 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:17:45.631958 | orchestrator |
2025-06-05 19:17:45.632763 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-05 19:17:45.633239 | orchestrator | Thursday 05 June 2025 19:17:45 +0000 (0:00:00.143) 0:00:11.154 *********
2025-06-05 19:17:45.677614 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:17:45.697015 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:17:45.740211 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:17:45.775203 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:17:45.776388 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:17:45.777182 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:17:45.778367 | orchestrator |
2025-06-05 19:17:45.779561 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-05 19:17:45.780244 | orchestrator | Thursday 05 June 2025 19:17:45 +0000 (0:00:00.144) 0:00:11.298 *********
2025-06-05 19:17:46.408115 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:46.408495 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:46.409944 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:46.410833 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:46.412269 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:46.413459 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:46.414630 | orchestrator |
2025-06-05 19:17:46.415161 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-05 19:17:46.416292 | orchestrator | Thursday 05 June 2025 19:17:46 +0000 (0:00:00.632) 0:00:11.931 *********
2025-06-05 19:17:46.497037 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:17:46.521318 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:17:46.606494 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:17:46.607005 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:17:46.608663 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:17:46.610421 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:17:46.611178 | orchestrator |
2025-06-05 19:17:46.612293 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:17:46.612918 | orchestrator | 2025-06-05 19:17:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:17:46.612947 | orchestrator | 2025-06-05 19:17:46 | INFO  | Please wait and do not abort execution.
2025-06-05 19:17:46.613977 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:17:46.615238 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:17:46.615763 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:17:46.616445 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:17:46.617676 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:17:46.618849 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:17:46.619750 | orchestrator |
2025-06-05 19:17:46.620869 | orchestrator |
2025-06-05 19:17:46.621701 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:17:46.622466 | orchestrator | Thursday 05 June 2025 19:17:46 +0000 (0:00:00.198) 0:00:12.129 *********
2025-06-05 19:17:46.623828 | orchestrator | ===============================================================================
2025-06-05 19:17:46.624120 | orchestrator | Gathering Facts --------------------------------------------------------- 3.24s
2025-06-05 19:17:46.624717 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.31s
2025-06-05 19:17:46.625346 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2025-06-05 19:17:46.625983 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.14s
2025-06-05 19:17:46.626452 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s
2025-06-05 19:17:46.627213 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-06-05 19:17:46.627711 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2025-06-05 19:17:46.627996 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-06-05 19:17:46.628531 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s
2025-06-05 19:17:46.628918 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-06-05 19:17:46.629495 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2025-06-05 19:17:46.629746 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-06-05 19:17:46.630313 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s
2025-06-05 19:17:46.631069 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2025-06-05 19:17:46.631480 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-06-05 19:17:46.631919 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-06-05 19:17:46.632448 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
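The `[WARNING]` in the operator play above, about `remote_tmp /root/.ansible/tmp` being created with mode 0700, can be avoided by pre-creating the directory with the permissions Ansible expects before the first module run. A minimal sketch, using a temporary path as a stand-in for `/root/.ansible/tmp` (the real path would require root):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for /root/.ansible/tmp; creating the real path needs root.
REMOTE_TMP="$(mktemp -d)/ansible/tmp"

# Create the full path and give the final directory mode 0700, so the
# Ansible module runner finds it already present with safe permissions.
mkdir -p "$REMOTE_TMP"
chmod 700 "$REMOTE_TMP"

stat -c '%a' "$REMOTE_TMP"
```
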
2025-06-05 19:17:47.067203 | orchestrator | + osism apply --environment custom facts
2025-06-05 19:17:48.700898 | orchestrator | 2025-06-05 19:17:48 | INFO  | Trying to run play facts in environment custom
2025-06-05 19:17:48.705558 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:17:48.705619 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:17:48.705628 | orchestrator | Registering Redlock._release_script
2025-06-05 19:17:48.761765 | orchestrator | 2025-06-05 19:17:48 | INFO  | Task c7ed3af7-13f4-418b-8f27-f47bc9430671 (facts) was prepared for execution.
2025-06-05 19:17:48.762626 | orchestrator | 2025-06-05 19:17:48 | INFO  | It takes a moment until task c7ed3af7-13f4-418b-8f27-f47bc9430671 (facts) has been started and output is visible here.
2025-06-05 19:17:52.552972 | orchestrator |
2025-06-05 19:17:52.553496 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-05 19:17:52.555073 | orchestrator |
2025-06-05 19:17:52.556164 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-05 19:17:52.556444 | orchestrator | Thursday 05 June 2025 19:17:52 +0000 (0:00:00.084) 0:00:00.084 *********
2025-06-05 19:17:53.914700 | orchestrator | ok: [testbed-manager]
2025-06-05 19:17:53.915628 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:53.916345 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:53.916484 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:53.917794 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:53.919254 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:53.920145 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:53.920928 | orchestrator |
2025-06-05 19:17:53.922118 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-05 19:17:53.922987 | orchestrator | Thursday 05 June 2025 19:17:53 +0000 (0:00:01.360) 0:00:01.444 *********
2025-06-05 19:17:55.046084 | orchestrator | ok: [testbed-manager]
2025-06-05 19:17:55.046420 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:17:55.047830 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:17:55.049453 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:17:55.050389 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:55.051469 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:55.052540 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:55.053715 | orchestrator |
2025-06-05 19:17:55.054762 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-05 19:17:55.055037 | orchestrator |
2025-06-05 19:17:55.056196 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-05 19:17:55.056547 | orchestrator | Thursday 05 June 2025 19:17:55 +0000 (0:00:01.133) 0:00:02.577 *********
2025-06-05 19:17:55.150004 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:55.151334 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:55.152244 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:55.152899 | orchestrator |
2025-06-05 19:17:55.153787 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-05 19:17:55.154849 | orchestrator | Thursday 05 June 2025 19:17:55 +0000 (0:00:00.105) 0:00:02.682 *********
2025-06-05 19:17:55.345752 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:55.347213 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:55.348056 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:55.349164 | orchestrator |
2025-06-05 19:17:55.350280 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-05 19:17:55.351189 | orchestrator | Thursday 05 June 2025 19:17:55 +0000 (0:00:00.196) 0:00:02.878 *********
2025-06-05 19:17:55.531788 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:55.531973 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:55.532284 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:55.532689 | orchestrator |
2025-06-05 19:17:55.533197 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-05 19:17:55.533637 | orchestrator | Thursday 05 June 2025 19:17:55 +0000 (0:00:00.186) 0:00:03.065 *********
2025-06-05 19:17:55.672869 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:17:55.675968 | orchestrator |
2025-06-05 19:17:55.676274 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-05 19:17:55.678085 | orchestrator | Thursday 05 June 2025 19:17:55 +0000 (0:00:00.139) 0:00:03.204 *********
2025-06-05 19:17:56.102167 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:56.102930 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:56.104031 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:56.104699 | orchestrator |
2025-06-05 19:17:56.105691 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-05 19:17:56.106098 | orchestrator | Thursday 05 June 2025 19:17:56 +0000 (0:00:00.429) 0:00:03.634 *********
2025-06-05 19:17:56.192133 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:17:56.192224 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:17:56.192553 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:17:56.192968 | orchestrator |
2025-06-05 19:17:56.193430 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-05 19:17:56.193860 | orchestrator | Thursday 05 June 2025 19:17:56 +0000 (0:00:00.092) 0:00:03.726 *********
2025-06-05 19:17:57.194275 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:57.196399 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:57.196431 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:57.196443 | orchestrator |
2025-06-05 19:17:57.196456 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-05 19:17:57.196518 | orchestrator | Thursday 05 June 2025 19:17:57 +0000 (0:00:00.999) 0:00:04.726 *********
2025-06-05 19:17:57.665489 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:17:57.666339 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:17:57.667815 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:17:57.668685 | orchestrator |
2025-06-05 19:17:57.669443 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-05 19:17:57.670490 | orchestrator | Thursday 05 June 2025 19:17:57 +0000 (0:00:00.469) 0:00:05.196 *********
2025-06-05 19:17:58.683631 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:17:58.683914 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:17:58.685426 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:17:58.685954 | orchestrator |
2025-06-05 19:17:58.686505 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-05 19:17:58.687133 | orchestrator | Thursday 05 June 2025 19:17:58 +0000 (0:00:01.017) 0:00:06.214 *********
2025-06-05 19:18:12.445913 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:18:12.446139 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:18:12.446159 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:18:12.446174 | orchestrator |
2025-06-05 19:18:12.446257 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-05 19:18:12.446578 | orchestrator | Thursday 05 June 2025 19:18:12 +0000 (0:00:13.759) 0:00:19.973 *********
2025-06-05 19:18:12.535122 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:18:12.535785 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:18:12.536753 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:18:12.537295 | orchestrator |
2025-06-05 19:18:12.537912 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-05 19:18:12.538558 | orchestrator | Thursday 05 June 2025 19:18:12 +0000 (0:00:00.094) 0:00:20.068 *********
2025-06-05 19:18:19.801684 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:18:19.801798 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:18:19.803590 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:18:19.804770 | orchestrator |
2025-06-05 19:18:19.805297 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-05 19:18:19.806459 | orchestrator | Thursday 05 June 2025 19:18:19 +0000 (0:00:07.263) 0:00:27.332 *********
2025-06-05 19:18:20.220374 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:18:20.220582 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:18:20.221779 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:18:20.222578 | orchestrator |
2025-06-05 19:18:20.223212 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-05 19:18:20.223956 | orchestrator | Thursday 05 June 2025 19:18:20 +0000 (0:00:00.421) 0:00:27.753 *********
2025-06-05 19:18:23.559447 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-05 19:18:23.559643 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-05 19:18:23.560749 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-05 19:18:23.563806 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-05 19:18:23.564152 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-05 19:18:23.565020 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-05 19:18:23.565656 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-05 19:18:23.566222 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-05 19:18:23.566723 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-05 19:18:23.567345 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-05 19:18:23.568079 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-05 19:18:23.568328 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-05 19:18:23.568638 | orchestrator |
2025-06-05 19:18:23.569023 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-05 19:18:23.569689 | orchestrator | Thursday 05 June 2025 19:18:23 +0000 (0:00:03.337) 0:00:31.091 *********
2025-06-05 19:18:24.659910 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:18:24.663644 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:18:24.663727 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:18:24.664345 | orchestrator |
2025-06-05 19:18:24.665748 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-05 19:18:24.666312 | orchestrator |
2025-06-05 19:18:24.667164 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-05 19:18:24.668096 | orchestrator | Thursday 05 June 2025 19:18:24 +0000 (0:00:01.100) 0:00:32.191 *********
2025-06-05 19:18:28.537404 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:18:28.537624 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:18:28.538750 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:18:28.538978 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:18:28.539342 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:18:28.540031 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:18:28.540663 | orchestrator | ok: [testbed-manager]
2025-06-05 19:18:28.541369 | orchestrator |
2025-06-05 19:18:28.541885 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:18:28.542409 | orchestrator | 2025-06-05 19:18:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:18:28.542446 | orchestrator | 2025-06-05 19:18:28 | INFO  | Please wait and do not abort execution.
2025-06-05 19:18:28.542987 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:18:28.543370 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:18:28.543908 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:18:28.544254 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:18:28.544806 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:18:28.545013 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:18:28.545564 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:18:28.546117 | orchestrator |
2025-06-05 19:18:28.546319 | orchestrator |
2025-06-05 19:18:28.546772 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:18:28.547304 | orchestrator | Thursday 05 June 2025 19:18:28 +0000 (0:00:03.878) 0:00:36.070 *********
2025-06-05 19:18:28.547558 | orchestrator | ===============================================================================
2025-06-05 19:18:28.548045 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.76s
2025-06-05 19:18:28.548575 | orchestrator | Install required packages (Debian) -------------------------------------- 7.26s
2025-06-05 19:18:28.548868 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.88s
2025-06-05 19:18:28.549745 | orchestrator | Copy fact files --------------------------------------------------------- 3.34s
2025-06-05 19:18:28.550431 | orchestrator | Create custom facts directory ------------------------------------------- 1.36s
2025-06-05 19:18:28.551049 | orchestrator | Copy fact file ---------------------------------------------------------- 1.13s
2025-06-05 19:18:28.551705 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.10s
2025-06-05 19:18:28.552024 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-06-05 19:18:28.552597 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2025-06-05 19:18:28.553151 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-06-05 19:18:28.553730 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s
2025-06-05 19:18:28.554355 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-06-05 19:18:28.556766 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s
2025-06-05 19:18:28.557443 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-06-05 19:18:28.557982 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-06-05 19:18:28.558159 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s
2025-06-05 19:18:28.559075 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2025-06-05 19:18:28.560081 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.09s
2025-06-05 19:18:28.971575 | orchestrator | + osism apply bootstrap
2025-06-05 19:18:30.702326 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:18:30.702475 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:18:30.702493 | orchestrator | Registering Redlock._release_script
2025-06-05 19:18:30.757653 | orchestrator | 2025-06-05 19:18:30 | INFO  | Task 6fb63d59-2be0-46a3-832a-daba1d0ab4a3 (bootstrap) was prepared for execution.
2025-06-05 19:18:30.757768 | orchestrator | 2025-06-05 19:18:30 | INFO  | It takes a moment until task 6fb63d59-2be0-46a3-832a-daba1d0ab4a3 (bootstrap) has been started and output is visible here.
2025-06-05 19:18:34.771585 | orchestrator |
2025-06-05 19:18:34.772537 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-05 19:18:34.773376 | orchestrator |
2025-06-05 19:18:34.775294 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-05 19:18:34.776259 | orchestrator | Thursday 05 June 2025 19:18:34 +0000 (0:00:00.162) 0:00:00.162 *********
2025-06-05 19:18:34.847214 | orchestrator | ok: [testbed-manager]
2025-06-05 19:18:34.872635 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:18:34.894699 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:18:34.919569 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:18:34.997473 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:18:34.998452 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:18:34.999277 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:18:35.003001 | orchestrator |
2025-06-05 19:18:35.003361 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-05 19:18:35.004077 | orchestrator |
2025-06-05 19:18:35.004650 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-05 19:18:35.005466 | orchestrator | Thursday 05 June 2025 19:18:34 +0000 (0:00:00.228) 0:00:00.390 *********
2025-06-05 19:18:39.484976 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:18:39.489140 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:18:39.489199 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:18:39.489213 | orchestrator | ok: [testbed-manager]
2025-06-05 19:18:39.489225 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:18:39.490273 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:18:39.491696 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:18:39.492352 | orchestrator |
2025-06-05 19:18:39.493526 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-05 19:18:39.493699 | orchestrator |
2025-06-05 19:18:39.494201 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-05 19:18:39.494707 | orchestrator | Thursday 05 June 2025 19:18:39 +0000 (0:00:04.487) 0:00:04.878 *********
2025-06-05 19:18:39.556025 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-05 19:18:39.591321 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-05 19:18:39.591405 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-05 19:18:39.591619 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-05 19:18:39.591810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:18:39.650457 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:18:39.650740 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-05 19:18:39.650776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-05 19:18:39.650949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:18:39.651779 | orchestrator |
skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-05 19:18:39.652134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-05 19:18:39.652468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-05 19:18:39.652839 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-05 19:18:39.700198 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-05 19:18:39.700306 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-05 19:18:39.700398 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-05 19:18:39.700628 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-05 19:18:39.701000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-05 19:18:39.701235 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-05 19:18:39.701614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-05 19:18:39.948669 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-05 19:18:39.953762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-05 19:18:39.954199 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:39.955222 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:18:39.956547 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-05 19:18:39.957550 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-05 19:18:39.958196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-05 19:18:39.959105 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-05 19:18:39.959968 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-05 19:18:39.960795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-05 19:18:39.961451 | orchestrator | 
skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-05 19:18:39.962242 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-05 19:18:39.962951 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-05 19:18:39.963604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-05 19:18:39.964210 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-05 19:18:39.964830 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-05 19:18:39.965375 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-05 19:18:39.966120 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-05 19:18:39.966621 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-05 19:18:39.967603 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-05 19:18:39.967952 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:18:39.968558 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-05 19:18:39.969139 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-05 19:18:39.969584 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:18:39.970149 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-05 19:18:39.970767 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-05 19:18:39.971337 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-05 19:18:39.972042 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-05 19:18:39.975837 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:18:39.977009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-05 19:18:39.977699 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-05 19:18:39.978622 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-2)  2025-06-05 19:18:39.979192 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:18:39.979918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-05 19:18:39.980920 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-05 19:18:39.981408 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:18:39.982103 | orchestrator | 2025-06-05 19:18:39.982559 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-05 19:18:39.983258 | orchestrator | 2025-06-05 19:18:39.983829 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-05 19:18:39.984411 | orchestrator | Thursday 05 June 2025 19:18:39 +0000 (0:00:00.463) 0:00:05.341 ********* 2025-06-05 19:18:41.214530 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:41.214657 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:41.215355 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:41.218101 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:41.218572 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:41.219581 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:41.220576 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:41.221770 | orchestrator | 2025-06-05 19:18:41.223201 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-05 19:18:41.224115 | orchestrator | Thursday 05 June 2025 19:18:41 +0000 (0:00:01.265) 0:00:06.607 ********* 2025-06-05 19:18:42.414731 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:42.414851 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:42.415179 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:42.418294 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:42.418329 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:42.419199 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:42.419665 | orchestrator | ok: 
[testbed-node-3] 2025-06-05 19:18:42.420862 | orchestrator | 2025-06-05 19:18:42.421708 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-05 19:18:42.422440 | orchestrator | Thursday 05 June 2025 19:18:42 +0000 (0:00:01.198) 0:00:07.805 ********* 2025-06-05 19:18:42.688052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:18:42.688155 | orchestrator | 2025-06-05 19:18:42.688311 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-05 19:18:42.689539 | orchestrator | Thursday 05 June 2025 19:18:42 +0000 (0:00:00.274) 0:00:08.080 ********* 2025-06-05 19:18:44.739529 | orchestrator | changed: [testbed-manager] 2025-06-05 19:18:44.739590 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:18:44.739596 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:18:44.740589 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:18:44.742970 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:44.743932 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:44.744643 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:18:44.745314 | orchestrator | 2025-06-05 19:18:44.746402 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-05 19:18:44.747259 | orchestrator | Thursday 05 June 2025 19:18:44 +0000 (0:00:02.049) 0:00:10.130 ********* 2025-06-05 19:18:44.808325 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:44.965205 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:18:44.965292 | 
orchestrator | 2025-06-05 19:18:44.965767 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-05 19:18:44.966701 | orchestrator | Thursday 05 June 2025 19:18:44 +0000 (0:00:00.223) 0:00:10.354 ********* 2025-06-05 19:18:45.868853 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:18:45.869638 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:18:45.870506 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:45.871416 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:45.872398 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:18:45.873565 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:18:45.874061 | orchestrator | 2025-06-05 19:18:45.874926 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-05 19:18:45.875294 | orchestrator | Thursday 05 June 2025 19:18:45 +0000 (0:00:00.906) 0:00:11.261 ********* 2025-06-05 19:18:45.933604 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:46.424849 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:18:46.424942 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:18:46.425507 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:46.426097 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:46.426442 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:18:46.428419 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:18:46.429042 | orchestrator | 2025-06-05 19:18:46.429586 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-05 19:18:46.430388 | orchestrator | Thursday 05 June 2025 19:18:46 +0000 (0:00:00.557) 0:00:11.818 ********* 2025-06-05 19:18:46.524092 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:18:46.538633 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:18:46.788830 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:18:46.789950 | orchestrator | 
skipping: [testbed-node-0] 2025-06-05 19:18:46.791929 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:18:46.792668 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:18:46.793392 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:46.794378 | orchestrator | 2025-06-05 19:18:46.795029 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-05 19:18:46.795711 | orchestrator | Thursday 05 June 2025 19:18:46 +0000 (0:00:00.363) 0:00:12.182 ********* 2025-06-05 19:18:46.868835 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:46.885690 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:18:46.906852 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:18:46.963272 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:18:46.964288 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:18:46.965195 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:18:46.965781 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:18:46.966583 | orchestrator | 2025-06-05 19:18:46.967249 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-05 19:18:46.967835 | orchestrator | Thursday 05 June 2025 19:18:46 +0000 (0:00:00.175) 0:00:12.357 ********* 2025-06-05 19:18:47.206292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:18:47.207033 | orchestrator | 2025-06-05 19:18:47.207276 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-05 19:18:47.207854 | orchestrator | Thursday 05 June 2025 19:18:47 +0000 (0:00:00.242) 0:00:12.600 ********* 2025-06-05 19:18:47.438100 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:18:47.438211 | orchestrator | 2025-06-05 19:18:47.440963 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-05 19:18:47.441309 | orchestrator | Thursday 05 June 2025 19:18:47 +0000 (0:00:00.231) 0:00:12.831 ********* 2025-06-05 19:18:48.638364 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:48.639572 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:48.640921 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:48.641786 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:48.642438 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:48.643369 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:48.644259 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:48.644945 | orchestrator | 2025-06-05 19:18:48.645425 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-05 19:18:48.645892 | orchestrator | Thursday 05 June 2025 19:18:48 +0000 (0:00:01.196) 0:00:14.028 ********* 2025-06-05 19:18:48.703735 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:48.726775 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:18:48.748801 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:18:48.773245 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:18:48.817876 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:18:48.817944 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:18:48.817956 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:18:48.817967 | orchestrator | 2025-06-05 19:18:48.818456 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-05 19:18:48.818804 | orchestrator | Thursday 05 June 2025 
19:18:48 +0000 (0:00:00.179) 0:00:14.208 ********* 2025-06-05 19:18:49.292784 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:49.293761 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:49.294455 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:49.295452 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:49.296928 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:49.296963 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:49.297166 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:49.297692 | orchestrator | 2025-06-05 19:18:49.298168 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-05 19:18:49.298660 | orchestrator | Thursday 05 June 2025 19:18:49 +0000 (0:00:00.477) 0:00:14.685 ********* 2025-06-05 19:18:49.364875 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:49.381451 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:18:49.402264 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:18:49.425742 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:18:49.478405 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:18:49.478677 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:18:49.479275 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:18:49.479738 | orchestrator | 2025-06-05 19:18:49.481335 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-05 19:18:49.482230 | orchestrator | Thursday 05 June 2025 19:18:49 +0000 (0:00:00.187) 0:00:14.872 ********* 2025-06-05 19:18:50.027989 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:50.029004 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:18:50.029507 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:18:50.029933 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:18:50.030428 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:50.030871 | orchestrator | changed: 
[testbed-node-1] 2025-06-05 19:18:50.031573 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:50.032866 | orchestrator | 2025-06-05 19:18:50.033548 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-05 19:18:50.034579 | orchestrator | Thursday 05 June 2025 19:18:50 +0000 (0:00:00.546) 0:00:15.419 ********* 2025-06-05 19:18:51.022944 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:51.023160 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:18:51.024046 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:18:51.024796 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:18:51.025646 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:51.026485 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:18:51.026947 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:51.027371 | orchestrator | 2025-06-05 19:18:51.028264 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-05 19:18:51.028869 | orchestrator | Thursday 05 June 2025 19:18:51 +0000 (0:00:00.995) 0:00:16.414 ********* 2025-06-05 19:18:52.030885 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:52.031046 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:52.031822 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:52.032430 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:52.033286 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:52.033577 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:52.034626 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:52.035783 | orchestrator | 2025-06-05 19:18:52.036259 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-05 19:18:52.036908 | orchestrator | Thursday 05 June 2025 19:18:52 +0000 (0:00:01.006) 0:00:17.421 ********* 2025-06-05 19:18:52.404112 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:18:52.404314 | orchestrator | 2025-06-05 19:18:52.405591 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-05 19:18:52.406177 | orchestrator | Thursday 05 June 2025 19:18:52 +0000 (0:00:00.375) 0:00:17.796 ********* 2025-06-05 19:18:52.479088 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:53.695908 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:18:53.696014 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:53.696365 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:53.696857 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:18:53.697804 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:18:53.699035 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:18:53.699615 | orchestrator | 2025-06-05 19:18:53.700370 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-05 19:18:53.705132 | orchestrator | Thursday 05 June 2025 19:18:53 +0000 (0:00:01.287) 0:00:19.084 ********* 2025-06-05 19:18:53.770940 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:53.796346 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:53.829260 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:53.852836 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:53.908685 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:53.909258 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:53.912031 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:53.912067 | orchestrator | 2025-06-05 19:18:53.912080 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-05 19:18:53.912092 | orchestrator | Thursday 05 June 2025 19:18:53 
+0000 (0:00:00.217) 0:00:19.302 ********* 2025-06-05 19:18:53.984974 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:54.004367 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:54.026746 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:54.051162 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:54.122745 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:54.122854 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:54.122924 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:54.123465 | orchestrator | 2025-06-05 19:18:54.123633 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-05 19:18:54.123909 | orchestrator | Thursday 05 June 2025 19:18:54 +0000 (0:00:00.214) 0:00:19.516 ********* 2025-06-05 19:18:54.215990 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:54.243876 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:54.276649 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:54.303377 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:54.372831 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:54.373611 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:54.374577 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:54.375556 | orchestrator | 2025-06-05 19:18:54.375903 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-05 19:18:54.376971 | orchestrator | Thursday 05 June 2025 19:18:54 +0000 (0:00:00.249) 0:00:19.766 ********* 2025-06-05 19:18:54.657077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:18:54.657972 | orchestrator | 2025-06-05 19:18:54.658831 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-05 19:18:54.659683 | 
orchestrator | Thursday 05 June 2025 19:18:54 +0000 (0:00:00.284) 0:00:20.050 ********* 2025-06-05 19:18:55.177074 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:55.178003 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:55.179722 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:55.179961 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:55.180969 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:55.181226 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:55.182328 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:55.182643 | orchestrator | 2025-06-05 19:18:55.182716 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-05 19:18:55.183977 | orchestrator | Thursday 05 June 2025 19:18:55 +0000 (0:00:00.517) 0:00:20.568 ********* 2025-06-05 19:18:55.270281 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:18:55.294711 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:18:55.317362 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:18:55.383510 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:18:55.383890 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:18:55.384942 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:18:55.386104 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:18:55.386645 | orchestrator | 2025-06-05 19:18:55.387716 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-05 19:18:55.388433 | orchestrator | Thursday 05 June 2025 19:18:55 +0000 (0:00:00.208) 0:00:20.776 ********* 2025-06-05 19:18:56.410282 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:56.410461 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:56.411324 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:56.412332 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:56.413202 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:56.414263 | orchestrator | changed: 
[testbed-node-1] 2025-06-05 19:18:56.415221 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:56.415849 | orchestrator | 2025-06-05 19:18:56.416599 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-05 19:18:56.417247 | orchestrator | Thursday 05 June 2025 19:18:56 +0000 (0:00:01.024) 0:00:21.800 ********* 2025-06-05 19:18:56.941395 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:56.941618 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:56.941894 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:56.942778 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:56.943212 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:18:56.943772 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:18:56.944256 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:18:56.944761 | orchestrator | 2025-06-05 19:18:56.945174 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-05 19:18:56.946528 | orchestrator | Thursday 05 June 2025 19:18:56 +0000 (0:00:00.533) 0:00:22.334 ********* 2025-06-05 19:18:58.056898 | orchestrator | ok: [testbed-manager] 2025-06-05 19:18:58.057351 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:18:58.058381 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:18:58.059301 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:18:58.060241 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:18:58.060905 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:18:58.061494 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:18:58.062105 | orchestrator | 2025-06-05 19:18:58.062587 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-05 19:18:58.063985 | orchestrator | Thursday 05 June 2025 19:18:58 +0000 (0:00:01.114) 0:00:23.448 ********* 2025-06-05 19:19:11.487928 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:19:11.488081 | orchestrator | ok: 
[testbed-node-4]
2025-06-05 19:19:11.488098 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:11.488110 | orchestrator | changed: [testbed-manager]
2025-06-05 19:19:11.488518 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:19:11.489235 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:19:11.489948 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:19:11.490670 | orchestrator |
2025-06-05 19:19:11.491260 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2025-06-05 19:19:11.491874 | orchestrator | Thursday 05 June 2025 19:19:11 +0000 (0:00:13.426) 0:00:36.874 *********
2025-06-05 19:19:11.555348 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:11.579234 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:11.605869 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:11.626483 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:11.676925 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:11.680994 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:11.681047 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:11.681060 | orchestrator |
2025-06-05 19:19:11.681073 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2025-06-05 19:19:11.681086 | orchestrator | Thursday 05 June 2025 19:19:11 +0000 (0:00:00.196) 0:00:37.070 *********
2025-06-05 19:19:11.748325 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:11.779618 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:11.799523 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:11.828097 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:11.888414 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:11.888558 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:11.888573 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:11.889199 | orchestrator |
2025-06-05 19:19:11.889888 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2025-06-05 19:19:11.890672 | orchestrator | Thursday 05 June 2025 19:19:11 +0000 (0:00:00.209) 0:00:37.280 *********
2025-06-05 19:19:11.960011 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:11.985867 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:12.009100 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:12.032412 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:12.100008 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:12.100339 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:12.100881 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:12.101899 | orchestrator |
2025-06-05 19:19:12.102835 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2025-06-05 19:19:12.103151 | orchestrator | Thursday 05 June 2025 19:19:12 +0000 (0:00:00.212) 0:00:37.492 *********
2025-06-05 19:19:12.379705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:19:12.379913 | orchestrator |
2025-06-05 19:19:12.380972 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2025-06-05 19:19:12.381983 | orchestrator | Thursday 05 June 2025 19:19:12 +0000 (0:00:00.279) 0:00:37.772 *********
2025-06-05 19:19:13.976107 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:13.977395 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:13.979023 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:13.979656 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:13.980555 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:13.981026 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:13.981963 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:13.982168 | orchestrator |
2025-06-05 19:19:13.982778 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2025-06-05 19:19:13.983276 | orchestrator | Thursday 05 June 2025 19:19:13 +0000 (0:00:01.592) 0:00:39.365 *********
2025-06-05 19:19:15.018924 | orchestrator | changed: [testbed-manager]
2025-06-05 19:19:15.019366 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:19:15.021830 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:19:15.021850 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:19:15.022126 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:19:15.023680 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:19:15.023693 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:19:15.023864 | orchestrator |
2025-06-05 19:19:15.024176 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2025-06-05 19:19:15.024845 | orchestrator | Thursday 05 June 2025 19:19:15 +0000 (0:00:01.044) 0:00:40.409 *********
2025-06-05 19:19:15.867087 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:15.868165 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:15.868847 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:15.870163 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:15.870617 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:15.871740 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:15.873030 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:15.874622 | orchestrator |
2025-06-05 19:19:15.875152 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2025-06-05 19:19:15.876364 | orchestrator | Thursday 05 June 2025 19:19:15 +0000 (0:00:00.848) 0:00:41.258 *********
2025-06-05 19:19:16.133161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:19:16.134591 | orchestrator |
2025-06-05 19:19:16.135740 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2025-06-05 19:19:16.136677 | orchestrator | Thursday 05 June 2025 19:19:16 +0000 (0:00:00.266) 0:00:41.524 *********
2025-06-05 19:19:17.100948 | orchestrator | changed: [testbed-manager]
2025-06-05 19:19:17.101049 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:19:17.101083 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:19:17.101503 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:19:17.102407 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:19:17.102940 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:19:17.102950 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:19:17.103279 | orchestrator |
2025-06-05 19:19:17.103580 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2025-06-05 19:19:17.104044 | orchestrator | Thursday 05 June 2025 19:19:17 +0000 (0:00:00.968) 0:00:42.493 *********
2025-06-05 19:19:17.193227 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:19:17.219788 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:19:17.241204 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:19:17.392129 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:19:17.392965 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:19:17.393861 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:19:17.394549 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:19:17.395583 | orchestrator |
2025-06-05 19:19:17.396462 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2025-06-05 19:19:17.397140 | orchestrator | Thursday 05 June 2025 19:19:17 +0000 (0:00:00.292) 0:00:42.785 *********
2025-06-05 19:19:28.646203 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:19:28.646781 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:19:28.646898 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:19:28.647957 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:19:28.649665 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:19:28.650303 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:19:28.651454 | orchestrator | changed: [testbed-manager]
2025-06-05 19:19:28.651897 | orchestrator |
2025-06-05 19:19:28.652612 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-06-05 19:19:28.653292 | orchestrator | Thursday 05 June 2025 19:19:28 +0000 (0:00:11.251) 0:00:54.036 *********
2025-06-05 19:19:29.650640 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:29.650741 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:29.650992 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:29.652004 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:29.652723 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:29.653012 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:29.654610 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:29.656402 | orchestrator |
2025-06-05 19:19:29.656669 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-06-05 19:19:29.656709 | orchestrator | Thursday 05 June 2025 19:19:29 +0000 (0:00:01.007) 0:00:55.043 *********
2025-06-05 19:19:30.533121 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:30.534491 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:30.535555 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:30.536351 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:30.537506 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:30.537894 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:30.538950 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:30.539499 | orchestrator |
2025-06-05 19:19:30.540636 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-06-05 19:19:30.541162 | orchestrator | Thursday 05 June 2025 19:19:30 +0000 (0:00:00.881) 0:00:55.925 *********
2025-06-05 19:19:30.605882 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:30.636934 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:30.657149 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:30.685618 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:30.753052 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:30.753651 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:30.754768 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:30.755778 | orchestrator |
2025-06-05 19:19:30.756262 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-06-05 19:19:30.757747 | orchestrator | Thursday 05 June 2025 19:19:30 +0000 (0:00:00.220) 0:00:56.146 *********
2025-06-05 19:19:30.827674 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:30.852044 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:30.876274 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:30.902509 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:30.954930 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:30.955470 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:30.956266 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:30.957058 | orchestrator |
2025-06-05 19:19:30.957910 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-06-05 19:19:30.958758 | orchestrator | Thursday 05 June 2025 19:19:30 +0000 (0:00:00.202) 0:00:56.348 *********
2025-06-05 19:19:31.225916 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:19:31.226618 | orchestrator |
2025-06-05 19:19:31.227892 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-06-05 19:19:31.228749 | orchestrator | Thursday 05 June 2025 19:19:31 +0000 (0:00:00.270) 0:00:56.619 *********
2025-06-05 19:19:32.821772 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:32.822818 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:32.824053 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:32.824486 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:32.824617 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:32.826075 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:32.826575 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:32.827008 | orchestrator |
2025-06-05 19:19:32.827749 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-06-05 19:19:32.828156 | orchestrator | Thursday 05 June 2025 19:19:32 +0000 (0:00:01.593) 0:00:58.212 *********
2025-06-05 19:19:33.365188 | orchestrator | changed: [testbed-manager]
2025-06-05 19:19:33.365644 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:19:33.366599 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:19:33.367844 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:19:33.368586 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:19:33.369319 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:19:33.370084 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:19:33.370764 | orchestrator |
2025-06-05 19:19:33.371234 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-06-05 19:19:33.371721 | orchestrator | Thursday 05 June 2025 19:19:33 +0000 (0:00:00.545) 0:00:58.757 *********
2025-06-05 19:19:33.443657 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:33.473681 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:33.498721 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:33.527274 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:33.583218 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:33.583378 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:33.585167 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:33.586092 | orchestrator |
2025-06-05 19:19:33.587345 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-06-05 19:19:33.588066 | orchestrator | Thursday 05 June 2025 19:19:33 +0000 (0:00:00.217) 0:00:58.975 *********
2025-06-05 19:19:34.723299 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:34.723548 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:34.724677 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:34.725775 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:34.726638 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:34.727308 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:34.728215 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:34.728709 | orchestrator |
2025-06-05 19:19:34.729280 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-06-05 19:19:34.729793 | orchestrator | Thursday 05 June 2025 19:19:34 +0000 (0:00:01.138) 0:01:00.114 *********
2025-06-05 19:19:36.305748 | orchestrator | changed: [testbed-manager]
2025-06-05 19:19:36.305856 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:19:36.307828 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:19:36.309581 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:19:36.310603 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:19:36.311771 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:19:36.312611 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:19:36.313174 | orchestrator |
2025-06-05 19:19:36.314744 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-06-05 19:19:36.315194 | orchestrator | Thursday 05 June 2025 19:19:36 +0000 (0:00:01.582) 0:01:01.696 *********
2025-06-05 19:19:38.445237 | orchestrator | ok: [testbed-manager]
2025-06-05 19:19:38.446120 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:19:38.447508 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:19:38.449162 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:19:38.450631 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:19:38.451510 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:19:38.452258 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:19:38.453073 | orchestrator |
2025-06-05 19:19:38.453746 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-06-05 19:19:38.454351 | orchestrator | Thursday 05 June 2025 19:19:38 +0000 (0:00:02.138) 0:01:03.835 *********
2025-06-05 19:20:13.318633 | orchestrator | ok: [testbed-manager]
2025-06-05 19:20:13.318749 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:20:13.320340 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:20:13.320403 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:20:13.320959 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:20:13.321865 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:20:13.323518 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:20:13.324189 | orchestrator |
2025-06-05 19:20:13.324690 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-06-05 19:20:13.325315 | orchestrator | Thursday 05 June 2025 19:20:13 +0000 (0:00:34.873) 0:01:38.708 *********
2025-06-05 19:21:27.536010 | orchestrator | changed: [testbed-manager]
2025-06-05 19:21:27.536128 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:21:27.536143 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:21:27.536154 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:21:27.536166 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:21:27.536177 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:21:27.536440 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:21:27.537195 | orchestrator |
2025-06-05 19:21:27.537978 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-06-05 19:21:27.539488 | orchestrator | Thursday 05 June 2025 19:21:27 +0000 (0:01:14.213) 0:02:52.922 *********
2025-06-05 19:21:29.133623 | orchestrator | ok: [testbed-manager]
2025-06-05 19:21:29.133733 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:21:29.134990 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:21:29.135520 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:21:29.135817 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:21:29.136810 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:21:29.137498 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:21:29.138178 | orchestrator |
2025-06-05 19:21:29.139116 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-06-05 19:21:29.139532 | orchestrator | Thursday 05 June 2025 19:21:29 +0000 (0:00:01.602) 0:02:54.524 *********
2025-06-05 19:21:40.857825 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:21:40.858124 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:21:40.858158 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:21:40.858930 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:21:40.859853 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:21:40.860646 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:21:40.861185 | orchestrator | changed: [testbed-manager]
2025-06-05 19:21:40.861953 | orchestrator |
2025-06-05 19:21:40.863056 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-06-05 19:21:40.863769 | orchestrator | Thursday 05 June 2025 19:21:40 +0000 (0:00:11.722) 0:03:06.247 *********
2025-06-05 19:21:41.230799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-06-05 19:21:41.231606 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-06-05 19:21:41.231976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-06-05 19:21:41.232930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-06-05 19:21:41.233634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-06-05 19:21:41.236358 | orchestrator |
2025-06-05 19:21:41.236903 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-06-05 19:21:41.237938 | orchestrator | Thursday 05 June 2025 19:21:41 +0000 (0:00:00.376) 0:03:06.624 *********
2025-06-05 19:21:41.294212 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.322337 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:21:41.323021 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.355708 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:21:41.355990 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.356378 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.382756 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:21:41.410860 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:21:41.932279 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.933470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.934336 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-06-05 19:21:41.934592 | orchestrator |
2025-06-05 19:21:41.934790 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-06-05 19:21:41.935044 | orchestrator | Thursday 05 June 2025 19:21:41 +0000 (0:00:00.699) 0:03:07.323 *********
2025-06-05 19:21:41.995443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:41.995615 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:41.996644 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:41.997352 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:41.997919 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:41.998697 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:41.999316 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:42.003198 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:42.003220 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:42.033368 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:42.039250 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:42.041560 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:42.042539 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:42.043499 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:42.044512 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:42.045686 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:42.046749 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:42.047720 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:42.049125 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:42.049846 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:42.069820 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:42.070767 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:21:42.071910 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:42.072860 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:42.075048 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:42.075412 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:42.075704 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:42.075912 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:42.076500 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:42.076871 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:42.077928 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:42.078074 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:42.078518 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:42.079020 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:42.109511 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:42.110311 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:21:42.110544 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:42.114138 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:42.114180 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:42.114200 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:42.114212 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:42.114223 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:42.131011 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:21:47.551580 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:21:47.551740 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:47.552296 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:47.553281 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-06-05 19:21:47.554531 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:47.554604 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:47.554619 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:47.554945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:47.555362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:47.556677 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:47.557053 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:47.557331 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-06-05 19:21:47.557753 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:47.558162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-06-05 19:21:47.558767 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:47.558910 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-06-05 19:21:47.559858 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:47.559882 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:47.560007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-06-05 19:21:47.560771 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:47.560810 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:47.561152 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-06-05 19:21:47.561482 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:47.561945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:47.562492 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-06-05 19:21:47.562590 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:47.562846 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:47.563896 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:47.563971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-06-05 19:21:47.564084 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-06-05 19:21:47.564102 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-06-05 19:21:47.565061 | orchestrator |
2025-06-05 19:21:47.566266 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-06-05 19:21:47.566291 | orchestrator | Thursday 05 June 2025 19:21:47 +0000 (0:00:05.618) 0:03:12.942 *********
2025-06-05 19:21:48.203317 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.203544 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.203567 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.207984 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.208041 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.208060 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.208078 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-06-05 19:21:48.208168 | orchestrator |
2025-06-05 19:21:48.208810 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-06-05 19:21:48.209408 | orchestrator | Thursday 05 June 2025 19:21:48 +0000 (0:00:00.652) 0:03:13.595 *********
2025-06-05 19:21:48.254790 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.281402 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:21:48.350853 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.703383 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.704078 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:21:48.705048 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:21:48.705824 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.706754 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:21:48.707939 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.708765 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.709349 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-06-05 19:21:48.709912 | orchestrator |
2025-06-05 19:21:48.710425 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-06-05 19:21:48.711026 | orchestrator | Thursday 05 June 2025 19:21:48 +0000 (0:00:00.499) 0:03:14.094 *********
2025-06-05 19:21:48.762352 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:48.787057 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:21:48.869566 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:49.272916 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:49.273026 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:21:49.273042 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:21:49.274191 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:49.274833 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:21:49.275939 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:49.276744 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:49.277682 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-06-05 19:21:49.278600 | orchestrator |
2025-06-05 19:21:49.279361 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-06-05 19:21:49.279989 | orchestrator | Thursday 05 June 2025 19:21:49 +0000 (0:00:00.565) 0:03:14.660 *********
2025-06-05 19:21:49.320326 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:21:49.346449 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:21:49.399785 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:21:49.423086 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:21:49.555014 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:21:49.556242 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:21:49.557724 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:21:49.557752 | orchestrator |
2025-06-05 19:21:49.558604 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-06-05 19:21:49.561849 | orchestrator | Thursday 05 June 2025 19:21:49 +0000 (0:00:00.286) 0:03:14.947 *********
2025-06-05 19:21:55.224928 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:21:55.225622 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:21:55.228044 | orchestrator | ok: [testbed-manager]
2025-06-05 19:21:55.228091 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:21:55.228348 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:21:55.229089 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:21:55.229694 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:21:55.230453 | orchestrator |
2025-06-05 19:21:55.231339 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-06-05 19:21:55.231861 | orchestrator | Thursday 05 June 2025 19:21:55 +0000 (0:00:05.669) 0:03:20.617 *********
2025-06-05 19:21:55.299124 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-06-05 19:21:55.341411 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:21:55.341728 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-06-05 19:21:55.342866 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-06-05 19:21:55.373105 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:21:55.413888 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-05 19:21:55.413982 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:21:55.414161 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-06-05 19:21:55.458484 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:21:55.458602 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-06-05 19:21:55.527697 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:21:55.528997 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:21:55.530864 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-06-05 19:21:55.531105 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:21:55.531884 | orchestrator |
2025-06-05 19:21:55.532502 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-05 19:21:55.533111 | orchestrator | Thursday 05 June 2025 19:21:55 +0000 (0:00:00.304) 0:03:20.921 *********
2025-06-05 19:21:56.530463 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-05 19:21:56.531200 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-05 19:21:56.531889 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-05 19:21:56.532677 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-05 19:21:56.532945 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-05 19:21:56.533357 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-05 19:21:56.534330 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-05 19:21:56.534560 | orchestrator |
2025-06-05 19:21:56.534961 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-05 19:21:56.535422 | orchestrator | Thursday 05 June 2025 19:21:56 +0000 (0:00:01.000) 0:03:21.922 *********
2025-06-05 19:21:57.032475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:21:57.032588 | orchestrator |
2025-06-05 19:21:57.032655 | orchestrator | TASK [osism.commons.motd : Remove
update-motd package] ************************* 2025-06-05 19:21:57.033176 | orchestrator | Thursday 05 June 2025 19:21:57 +0000 (0:00:00.498) 0:03:22.420 ********* 2025-06-05 19:21:58.120024 | orchestrator | ok: [testbed-manager] 2025-06-05 19:21:58.121409 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:21:58.122818 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:21:58.122865 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:21:58.124173 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:21:58.125099 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:21:58.126467 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:21:58.126946 | orchestrator | 2025-06-05 19:21:58.128070 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-05 19:21:58.128478 | orchestrator | Thursday 05 June 2025 19:21:58 +0000 (0:00:01.090) 0:03:23.510 ********* 2025-06-05 19:21:58.719813 | orchestrator | ok: [testbed-manager] 2025-06-05 19:21:58.719920 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:21:58.720001 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:21:58.721019 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:21:58.721706 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:21:58.722700 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:21:58.723703 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:21:58.724168 | orchestrator | 2025-06-05 19:21:58.724699 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-05 19:21:58.725713 | orchestrator | Thursday 05 June 2025 19:21:58 +0000 (0:00:00.600) 0:03:24.111 ********* 2025-06-05 19:21:59.343808 | orchestrator | changed: [testbed-manager] 2025-06-05 19:21:59.346552 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:21:59.347854 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:21:59.348974 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:21:59.350173 | orchestrator | changed: [testbed-node-1] 
2025-06-05 19:21:59.351240 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:21:59.352282 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:21:59.352947 | orchestrator | 2025-06-05 19:21:59.353553 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-05 19:21:59.354012 | orchestrator | Thursday 05 June 2025 19:21:59 +0000 (0:00:00.623) 0:03:24.735 ********* 2025-06-05 19:21:59.906416 | orchestrator | ok: [testbed-manager] 2025-06-05 19:21:59.906734 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:21:59.909693 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:21:59.909747 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:21:59.910298 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:21:59.911547 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:21:59.912443 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:21:59.913389 | orchestrator | 2025-06-05 19:21:59.914144 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-05 19:21:59.915503 | orchestrator | Thursday 05 June 2025 19:21:59 +0000 (0:00:00.564) 0:03:25.299 ********* 2025-06-05 19:22:00.969010 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150011.4438097, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.969685 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150072.4399986, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.970089 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150074.6236458, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.970774 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150072.0512764, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.971763 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150089.4137, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.973311 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150076.1312416, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.974741 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1749150078.7820268, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.975346 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749150033.6960783, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.976169 | orchestrator | changed: [testbed-node-1] => 
(item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749149968.7429125, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.976819 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749149970.7421489, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.977559 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749149983.7210069, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.979114 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1749149974.4852026, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.981065 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749149977.8762326, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.981591 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1749149972.760647, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:22:00.982086 | orchestrator | 2025-06-05 19:22:00.982946 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-05 19:22:00.983102 | orchestrator | Thursday 05 June 2025 19:22:00 +0000 (0:00:01.062) 0:03:26.361 ********* 2025-06-05 19:22:02.054645 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:02.055929 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:02.056536 | orchestrator | changed: [testbed-node-0] 2025-06-05 
19:22:02.059115 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:02.059431 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:02.060317 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:02.060485 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:02.060938 | orchestrator | 2025-06-05 19:22:02.061401 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-05 19:22:02.061926 | orchestrator | Thursday 05 June 2025 19:22:02 +0000 (0:00:01.085) 0:03:27.447 ********* 2025-06-05 19:22:03.185499 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:03.190271 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:03.190767 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:03.190976 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:03.191504 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:03.192462 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:03.193418 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:03.193715 | orchestrator | 2025-06-05 19:22:03.194410 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-05 19:22:03.195094 | orchestrator | Thursday 05 June 2025 19:22:03 +0000 (0:00:01.129) 0:03:28.576 ********* 2025-06-05 19:22:04.281887 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:04.282707 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:04.284025 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:04.284425 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:04.285257 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:04.286108 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:04.287025 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:04.287670 | orchestrator | 2025-06-05 19:22:04.288720 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 
2025-06-05 19:22:04.288857 | orchestrator | Thursday 05 June 2025 19:22:04 +0000 (0:00:01.096) 0:03:29.673 ********* 2025-06-05 19:22:04.389665 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:22:04.439688 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:22:04.475417 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:22:04.508627 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:22:04.562909 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:22:04.563565 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:22:04.564637 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:22:04.565885 | orchestrator | 2025-06-05 19:22:04.566671 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-05 19:22:04.567285 | orchestrator | Thursday 05 June 2025 19:22:04 +0000 (0:00:00.282) 0:03:29.955 ********* 2025-06-05 19:22:05.255470 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:05.256506 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:05.259965 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:05.260015 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:05.260028 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:22:05.260041 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:05.260351 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:05.261114 | orchestrator | 2025-06-05 19:22:05.261868 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-05 19:22:05.262451 | orchestrator | Thursday 05 June 2025 19:22:05 +0000 (0:00:00.690) 0:03:30.646 ********* 2025-06-05 19:22:05.634964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:22:05.636625 | orchestrator | 2025-06-05 19:22:05.638670 | 
orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-05 19:22:05.638699 | orchestrator | Thursday 05 June 2025 19:22:05 +0000 (0:00:00.381) 0:03:31.027 ********* 2025-06-05 19:22:13.208339 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:13.210997 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:13.211041 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:13.213182 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:13.214390 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:13.215494 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:13.216350 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:13.217150 | orchestrator | 2025-06-05 19:22:13.218006 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-05 19:22:13.218876 | orchestrator | Thursday 05 June 2025 19:22:13 +0000 (0:00:07.571) 0:03:38.598 ********* 2025-06-05 19:22:14.391370 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:14.391535 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:14.393238 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:14.398946 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:22:14.399296 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:14.402737 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:14.408988 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:14.412151 | orchestrator | 2025-06-05 19:22:14.412361 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-05 19:22:14.412860 | orchestrator | Thursday 05 June 2025 19:22:14 +0000 (0:00:01.183) 0:03:39.782 ********* 2025-06-05 19:22:15.465848 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:15.466076 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:15.466991 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:15.467576 | orchestrator | ok: [testbed-node-0] 2025-06-05 
19:22:15.468841 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:15.468867 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:15.469434 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:15.469969 | orchestrator | 2025-06-05 19:22:15.470532 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-05 19:22:15.470816 | orchestrator | Thursday 05 June 2025 19:22:15 +0000 (0:00:01.072) 0:03:40.854 ********* 2025-06-05 19:22:15.970323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:22:15.970435 | orchestrator | 2025-06-05 19:22:15.970670 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-05 19:22:15.971998 | orchestrator | Thursday 05 June 2025 19:22:15 +0000 (0:00:00.507) 0:03:41.362 ********* 2025-06-05 19:22:24.307325 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:24.307449 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:24.307465 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:24.308584 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:24.311102 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:24.312234 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:24.313480 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:24.314793 | orchestrator | 2025-06-05 19:22:24.315991 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-05 19:22:24.316605 | orchestrator | Thursday 05 June 2025 19:22:24 +0000 (0:00:08.335) 0:03:49.697 ********* 2025-06-05 19:22:25.020302 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:25.020414 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:25.020864 | orchestrator | 
changed: [testbed-node-4] 2025-06-05 19:22:25.021998 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:25.022079 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:25.022613 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:25.023469 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:25.024241 | orchestrator | 2025-06-05 19:22:25.024719 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-05 19:22:25.025662 | orchestrator | Thursday 05 June 2025 19:22:25 +0000 (0:00:00.714) 0:03:50.411 ********* 2025-06-05 19:22:26.096729 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:26.097173 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:26.100725 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:26.102245 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:26.102283 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:26.102519 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:26.103078 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:26.103527 | orchestrator | 2025-06-05 19:22:26.103902 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-05 19:22:26.104362 | orchestrator | Thursday 05 June 2025 19:22:26 +0000 (0:00:01.077) 0:03:51.489 ********* 2025-06-05 19:22:27.078557 | orchestrator | changed: [testbed-manager] 2025-06-05 19:22:27.079556 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:22:27.080312 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:22:27.081306 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:22:27.082268 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:22:27.083172 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:22:27.083888 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:22:27.084672 | orchestrator | 2025-06-05 19:22:27.085567 | orchestrator | TASK [osism.commons.cleanup : Gather variables for 
each operating system] ****** 2025-06-05 19:22:27.086984 | orchestrator | Thursday 05 June 2025 19:22:27 +0000 (0:00:00.980) 0:03:52.470 ********* 2025-06-05 19:22:27.180315 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:27.217250 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:27.250097 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:27.280998 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:27.346503 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:22:27.346708 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:27.348414 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:27.348439 | orchestrator | 2025-06-05 19:22:27.348452 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-05 19:22:27.348465 | orchestrator | Thursday 05 June 2025 19:22:27 +0000 (0:00:00.270) 0:03:52.741 ********* 2025-06-05 19:22:27.462612 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:27.495552 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:27.534079 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:27.570640 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:27.641684 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:22:27.642485 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:27.643575 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:27.644541 | orchestrator | 2025-06-05 19:22:27.646841 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-05 19:22:27.646886 | orchestrator | Thursday 05 June 2025 19:22:27 +0000 (0:00:00.293) 0:03:53.034 ********* 2025-06-05 19:22:27.745669 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:27.777173 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:27.814797 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:27.847851 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:27.917238 | orchestrator | ok: [testbed-node-0] 2025-06-05 
19:22:27.917410 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:27.917909 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:27.918958 | orchestrator | 2025-06-05 19:22:27.919595 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-05 19:22:27.920078 | orchestrator | Thursday 05 June 2025 19:22:27 +0000 (0:00:00.277) 0:03:53.311 ********* 2025-06-05 19:22:33.607980 | orchestrator | ok: [testbed-manager] 2025-06-05 19:22:33.608098 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:22:33.608509 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:22:33.609649 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:22:33.610927 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:22:33.611457 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:22:33.612316 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:22:33.613225 | orchestrator | 2025-06-05 19:22:33.614111 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-05 19:22:33.614817 | orchestrator | Thursday 05 June 2025 19:22:33 +0000 (0:00:05.688) 0:03:59.000 ********* 2025-06-05 19:22:33.963896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:22:33.964002 | orchestrator | 2025-06-05 19:22:33.964909 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-05 19:22:33.966312 | orchestrator | Thursday 05 June 2025 19:22:33 +0000 (0:00:00.353) 0:03:59.354 ********* 2025-06-05 19:22:34.057874 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-05 19:22:34.057979 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-05 19:22:34.057995 | orchestrator | skipping: [testbed-node-3] => 
(item=apt-daily-upgrade)
2025-06-05 19:22:34.060631 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-05 19:22:34.091940 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:22:34.140089 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:22:34.140236 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-05 19:22:34.140255 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-05 19:22:34.193466 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-05 19:22:34.194116 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:22:34.194991 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-05 19:22:34.196140 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-05 19:22:34.248985 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-05 19:22:34.249884 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:22:34.251740 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-05 19:22:34.251776 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-05 19:22:34.324341 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:22:34.324866 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:22:34.325887 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-05 19:22:34.327338 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-05 19:22:34.328279 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:22:34.329629 | orchestrator |
2025-06-05 19:22:34.330279 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-05 19:22:34.331818 | orchestrator | Thursday 05 June 2025 19:22:34 +0000 (0:00:00.362) 0:03:59.716 *********
2025-06-05 19:22:34.699766 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:22:34.700320 | orchestrator |
2025-06-05 19:22:34.701120 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-05 19:22:34.704607 | orchestrator | Thursday 05 June 2025 19:22:34 +0000 (0:00:00.375) 0:04:00.092 *********
2025-06-05 19:22:34.787939 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-05 19:22:34.788040 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-05 19:22:34.816573 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:22:34.864149 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:22:34.864648 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-05 19:22:34.868230 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-05 19:22:34.901863 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:22:34.902234 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-05 19:22:34.937007 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:22:34.996090 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-05 19:22:34.996323 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:22:34.996874 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:22:34.996899 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-05 19:22:34.997233 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:22:34.998798 | orchestrator |
2025-06-05 19:22:34.998821 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-05 19:22:34.999034 | orchestrator | Thursday 05 June 2025 19:22:34 +0000 (0:00:00.297) 0:04:00.390 *********
2025-06-05 19:22:35.491253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:22:35.491469 | orchestrator |
2025-06-05 19:22:35.495577 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-05 19:22:35.495611 | orchestrator | Thursday 05 June 2025 19:22:35 +0000 (0:00:00.493) 0:04:00.883 *********
2025-06-05 19:23:09.221893 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:09.222236 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:09.222264 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:09.222276 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:09.222288 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:09.222299 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:09.222310 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:09.222409 | orchestrator |
2025-06-05 19:23:09.222427 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-05 19:23:09.222942 | orchestrator | Thursday 05 June 2025 19:23:09 +0000 (0:00:33.723) 0:04:34.607 *********
2025-06-05 19:23:17.274765 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:17.277485 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:17.278876 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:17.279272 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:17.281255 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:17.282266 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:17.283468 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:17.284767 | orchestrator |
2025-06-05 19:23:17.285253 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-05 19:23:17.286668 | orchestrator | Thursday 05 June 2025 19:23:17 +0000 (0:00:08.058) 0:04:42.666 *********
2025-06-05 19:23:24.544000 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:24.544269 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:24.544783 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:24.546511 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:24.547372 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:24.547881 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:24.548576 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:24.548980 | orchestrator |
2025-06-05 19:23:24.549769 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-05 19:23:24.550463 | orchestrator | Thursday 05 June 2025 19:23:24 +0000 (0:00:07.268) 0:04:49.934 *********
2025-06-05 19:23:26.163033 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:26.163115 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:23:26.163138 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:23:26.164563 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:23:26.165657 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:23:26.166941 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:23:26.167865 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:23:26.168806 | orchestrator |
2025-06-05 19:23:26.169595 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-05 19:23:26.170645 | orchestrator | Thursday 05 June 2025 19:23:26 +0000 (0:00:01.616) 0:04:51.550 *********
2025-06-05 19:23:31.449942 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:31.450725 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:31.450761 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:31.451184 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:31.452071 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:31.453081 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:31.454173 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:31.455320 | orchestrator |
2025-06-05 19:23:31.455815 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-05 19:23:31.456035 | orchestrator | Thursday 05 June 2025 19:23:31 +0000 (0:00:05.289) 0:04:56.840 *********
2025-06-05 19:23:31.831428 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:23:31.831566 | orchestrator |
2025-06-05 19:23:31.831651 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-05 19:23:31.832169 | orchestrator | Thursday 05 June 2025 19:23:31 +0000 (0:00:00.383) 0:04:57.223 *********
2025-06-05 19:23:32.517688 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:32.517919 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:32.518199 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:32.519075 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:32.520537 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:32.520850 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:32.521535 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:32.523069 | orchestrator |
2025-06-05 19:23:32.523797 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-05 19:23:32.524469 | orchestrator | Thursday 05 June 2025 19:23:32 +0000 (0:00:00.685) 0:04:57.909 *********
2025-06-05 19:23:34.051214 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:34.051899 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:23:34.052015 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:23:34.053187 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:23:34.053916 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:23:34.054635 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:23:34.055606 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:23:34.055861 | orchestrator |
2025-06-05 19:23:34.056845 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-05 19:23:34.058192 | orchestrator | Thursday 05 June 2025 19:23:34 +0000 (0:00:01.532) 0:04:59.441 *********
2025-06-05 19:23:34.806840 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:34.809501 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:34.809572 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:34.810203 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:34.811521 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:34.812477 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:34.813079 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:34.813500 | orchestrator |
2025-06-05 19:23:34.814261 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-05 19:23:34.815069 | orchestrator | Thursday 05 June 2025 19:23:34 +0000 (0:00:00.757) 0:05:00.199 *********
2025-06-05 19:23:34.910504 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:23:34.939912 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:23:34.990679 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:23:35.025087 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:23:35.087989 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:23:35.088578 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:23:35.092917 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:23:35.093257 | orchestrator |
2025-06-05 19:23:35.095602 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
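The timezone tasks above amount to two changes per host: recording the zone name in /etc/timezone and pointing /etc/localtime at the matching zoneinfo file. A minimal shell sketch of that effect (an assumption about the role's mechanics, demonstrated against a scratch directory instead of the real /etc):

```shell
# Scratch-directory stand-in for /etc; the real tasks would target /etc itself.
etc=$(mktemp -d)
echo "Etc/UTC" > "$etc/timezone"                      # role: "Set timezone to UTC"
ln -sf /usr/share/zoneinfo/Etc/UTC "$etc/localtime"   # symlink may dangle on minimal systems
cat "$etc/timezone"                                   # -> Etc/UTC
```

The /etc/adjtime tasks are skipped in this run, which is consistent with virtualized nodes that keep the hardware clock in UTC already.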
2025-06-05 19:23:35.096386 | orchestrator | Thursday 05 June 2025 19:23:35 +0000 (0:00:00.281) 0:05:00.480 *********
2025-06-05 19:23:35.166248 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:23:35.199243 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:23:35.230498 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:23:35.260325 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:23:35.289888 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:23:35.454411 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:23:35.455244 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:23:35.458549 | orchestrator |
2025-06-05 19:23:35.458578 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-05 19:23:35.459316 | orchestrator | Thursday 05 June 2025 19:23:35 +0000 (0:00:00.366) 0:05:00.847 *********
2025-06-05 19:23:35.563281 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:35.596812 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:23:35.629778 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:23:35.665167 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:23:35.733288 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:23:35.733516 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:23:35.734101 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:23:35.734907 | orchestrator |
2025-06-05 19:23:35.735084 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-05 19:23:35.736441 | orchestrator | Thursday 05 June 2025 19:23:35 +0000 (0:00:00.280) 0:05:01.127 *********
2025-06-05 19:23:35.818980 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:23:35.851574 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:23:35.884824 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:23:35.916013 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:23:35.950223 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:23:36.006868 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:23:36.007984 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:23:36.008784 | orchestrator |
2025-06-05 19:23:36.009783 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-06-05 19:23:36.010801 | orchestrator | Thursday 05 June 2025 19:23:36 +0000 (0:00:00.272) 0:05:01.400 *********
2025-06-05 19:23:36.107858 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:36.142638 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:23:36.178279 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:23:36.234061 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:23:36.329425 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:23:36.330166 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:23:36.331340 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:23:36.332213 | orchestrator |
2025-06-05 19:23:36.332912 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-06-05 19:23:36.333496 | orchestrator | Thursday 05 June 2025 19:23:36 +0000 (0:00:00.321) 0:05:01.722 *********
2025-06-05 19:23:36.437262 | orchestrator | ok: [testbed-manager] =>
2025-06-05 19:23:36.437437 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.467963 | orchestrator | ok: [testbed-node-3] =>
2025-06-05 19:23:36.469075 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.499487 | orchestrator | ok: [testbed-node-4] =>
2025-06-05 19:23:36.500057 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.532642 | orchestrator | ok: [testbed-node-5] =>
2025-06-05 19:23:36.533254 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.598936 | orchestrator | ok: [testbed-node-0] =>
2025-06-05 19:23:36.600089 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.601564 | orchestrator | ok: [testbed-node-1] =>
2025-06-05 19:23:36.601955 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.602597 | orchestrator | ok: [testbed-node-2] =>
2025-06-05 19:23:36.603223 | orchestrator |  docker_version: 5:27.5.1
2025-06-05 19:23:36.604077 | orchestrator |
2025-06-05 19:23:36.604848 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-06-05 19:23:36.605389 | orchestrator | Thursday 05 June 2025 19:23:36 +0000 (0:00:00.270) 0:05:01.992 *********
2025-06-05 19:23:36.692479 | orchestrator | ok: [testbed-manager] =>
2025-06-05 19:23:36.692924 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.741684 | orchestrator | ok: [testbed-node-3] =>
2025-06-05 19:23:36.743498 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.882304 | orchestrator | ok: [testbed-node-4] =>
2025-06-05 19:23:36.882589 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.917926 | orchestrator | ok: [testbed-node-5] =>
2025-06-05 19:23:36.922620 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.991671 | orchestrator | ok: [testbed-node-0] =>
2025-06-05 19:23:36.992722 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.993627 | orchestrator | ok: [testbed-node-1] =>
2025-06-05 19:23:36.994283 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.994858 | orchestrator | ok: [testbed-node-2] =>
2025-06-05 19:23:36.995871 | orchestrator |  docker_cli_version: 5:27.5.1
2025-06-05 19:23:36.996595 | orchestrator |
2025-06-05 19:23:36.996962 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-06-05 19:23:36.997477 | orchestrator | Thursday 05 June 2025 19:23:36 +0000 (0:00:00.391) 0:05:02.384 *********
2025-06-05 19:23:37.061268 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:23:37.136969 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:23:37.169872 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:23:37.201606 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:23:37.251832 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:23:37.251898 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:23:37.253051 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:23:37.254089 | orchestrator |
2025-06-05 19:23:37.254515 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-06-05 19:23:37.255793 | orchestrator | Thursday 05 June 2025 19:23:37 +0000 (0:00:00.260) 0:05:02.644 *********
2025-06-05 19:23:37.378710 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:23:37.412026 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:23:37.443774 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:23:37.475535 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:23:37.528071 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:23:37.529254 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:23:37.529942 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:23:37.533955 | orchestrator |
2025-06-05 19:23:37.536216 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-06-05 19:23:37.536594 | orchestrator | Thursday 05 June 2025 19:23:37 +0000 (0:00:00.275) 0:05:02.920 *********
2025-06-05 19:23:37.937751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:23:37.937883 | orchestrator |
2025-06-05 19:23:37.938711 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-06-05 19:23:37.939345 | orchestrator | Thursday 05 June 2025 19:23:37 +0000 (0:00:00.409) 0:05:03.330 *********
2025-06-05 19:23:38.773006 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:38.773682 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:23:38.774437 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:23:38.775721 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:23:38.776597 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:23:38.777335 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:23:38.777989 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:23:38.778630 | orchestrator |
2025-06-05 19:23:38.779207 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-06-05 19:23:38.779935 | orchestrator | Thursday 05 June 2025 19:23:38 +0000 (0:00:00.834) 0:05:04.164 *********
2025-06-05 19:23:41.513286 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:23:41.513921 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:23:41.514402 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:23:41.515844 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:23:41.515867 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:41.516245 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:23:41.516921 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:23:41.517392 | orchestrator |
2025-06-05 19:23:41.517978 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-06-05 19:23:41.518718 | orchestrator | Thursday 05 June 2025 19:23:41 +0000 (0:00:02.741) 0:05:06.906 *********
2025-06-05 19:23:41.591695 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-06-05 19:23:41.591912 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-06-05 19:23:41.661263 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-06-05 19:23:41.662351 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-06-05 19:23:41.666534 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-06-05 19:23:41.734605 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:23:41.736099 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-06-05 19:23:41.737746 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-06-05 19:23:41.738638 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-06-05 19:23:41.739025 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-06-05 19:23:41.811193 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:23:41.815801 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-06-05 19:23:41.815836 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-06-05 19:23:41.815906 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-06-05 19:23:42.054421 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:23:42.058706 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-06-05 19:23:42.058750 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-06-05 19:23:42.058764 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-06-05 19:23:42.126303 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:23:42.127428 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-06-05 19:23:42.128039 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-06-05 19:23:42.128649 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-06-05 19:23:42.259270 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:23:42.261456 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:23:42.263094 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-06-05 19:23:42.264461 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-06-05 19:23:42.265836 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-06-05 19:23:42.267234 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:23:42.267763 | orchestrator |
2025-06-05 19:23:42.268942 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-06-05 19:23:42.269299 | orchestrator | Thursday 05 June 2025 19:23:42 +0000 (0:00:00.743) 0:05:07.649 *********
2025-06-05 19:23:48.465208 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:48.465327 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:48.466337 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:48.467592 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:48.470334 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:48.470552 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:48.471583 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:48.472776 | orchestrator |
2025-06-05 19:23:48.473440 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-06-05 19:23:48.474231 | orchestrator | Thursday 05 June 2025 19:23:48 +0000 (0:00:06.206) 0:05:13.856 *********
2025-06-05 19:23:49.504839 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:49.504963 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:49.505307 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:49.505530 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:49.506096 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:49.506492 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:49.506825 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:49.507279 | orchestrator |
2025-06-05 19:23:49.507756 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-06-05 19:23:49.510135 | orchestrator | Thursday 05 June 2025 19:23:49 +0000 (0:00:01.040) 0:05:14.896 *********
2025-06-05 19:23:56.701461 | orchestrator | ok: [testbed-manager]
2025-06-05 19:23:56.701652 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:56.702754 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:56.703763 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:56.706635 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:56.706661 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:56.708274 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:56.709408 | orchestrator |
2025-06-05 19:23:56.710143 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-06-05 19:23:56.710839 | orchestrator | Thursday 05 June 2025 19:23:56 +0000 (0:00:07.193) 0:05:22.090 *********
2025-06-05 19:23:59.831028 | orchestrator | changed: [testbed-manager]
2025-06-05 19:23:59.831791 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:23:59.832936 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:23:59.833378 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:23:59.835398 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:23:59.836464 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:23:59.837262 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:23:59.838136 | orchestrator |
2025-06-05 19:23:59.839173 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-06-05 19:23:59.839503 | orchestrator | Thursday 05 June 2025 19:23:59 +0000 (0:00:03.131) 0:05:25.221 *********
2025-06-05 19:24:01.406616 | orchestrator | ok: [testbed-manager]
2025-06-05 19:24:01.407232 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:01.407948 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:01.410965 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:01.411006 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:01.411019 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:01.412000 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:01.413057 | orchestrator |
2025-06-05 19:24:01.413982 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-06-05 19:24:01.414557 | orchestrator | Thursday 05 June 2025 19:24:01 +0000 (0:00:01.575) 0:05:26.797 *********
2025-06-05 19:24:02.707248 | orchestrator | ok: [testbed-manager]
2025-06-05 19:24:02.709163 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:02.709234 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:02.710568 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:02.711489 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:02.712079 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:02.712728 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:02.713365 | orchestrator |
2025-06-05 19:24:02.714006 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-06-05 19:24:02.714657 | orchestrator | Thursday 05 June 2025 19:24:02 +0000 (0:00:01.301) 0:05:28.099 *********
2025-06-05 19:24:02.916686 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:24:02.983897 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:24:03.045885 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:24:03.111488 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:24:03.255761 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:24:03.256551 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:24:03.256947 | orchestrator | changed: [testbed-manager]
2025-06-05 19:24:03.257894 | orchestrator |
2025-06-05 19:24:03.258548 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-06-05 19:24:03.259701 | orchestrator | Thursday 05 June 2025 19:24:03 +0000 (0:00:00.550) 0:05:28.649 *********
2025-06-05 19:24:12.801543 | orchestrator | ok: [testbed-manager]
2025-06-05 19:24:12.802205 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:12.805295 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:12.805734 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:12.806481 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:12.806981 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:12.807816 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:12.808228 | orchestrator |
2025-06-05 19:24:12.808752 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-06-05 19:24:12.809272 | orchestrator | Thursday 05 June 2025 19:24:12 +0000 (0:00:09.541) 0:05:38.190 *********
2025-06-05 19:24:13.713567 | orchestrator | changed: [testbed-manager]
2025-06-05 19:24:13.715793 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:13.716685 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:13.717720 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:13.718900 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:13.720297 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:13.721304 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:13.721914 | orchestrator |
2025-06-05 19:24:13.722666 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-06-05 19:24:13.723265 | orchestrator | Thursday 05 June 2025 19:24:13 +0000 (0:00:00.913) 0:05:39.104 *********
2025-06-05 19:24:22.547807 | orchestrator | ok: [testbed-manager]
2025-06-05 19:24:22.547928 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:22.549534 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:22.549636 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:22.549888 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:22.550680 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:22.551361 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:22.551951 | orchestrator |
2025-06-05 19:24:22.552903 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-06-05 19:24:22.554172 | orchestrator | Thursday 05 June 2025 19:24:22 +0000 (0:00:08.834) 0:05:47.938 *********
2025-06-05 19:24:33.705041 | orchestrator | ok: [testbed-manager]
2025-06-05 19:24:33.705209 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:33.705488 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:33.706505 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:33.707101 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:33.707532 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:33.709157 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:33.709837 | orchestrator |
2025-06-05 19:24:33.710286 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-06-05 19:24:33.710938 | orchestrator | Thursday 05 June 2025 19:24:33 +0000 (0:00:11.155) 0:05:59.094 *********
2025-06-05 19:24:34.049266 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-06-05 19:24:34.911411 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-06-05 19:24:34.911518 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-06-05 19:24:34.911822 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-06-05 19:24:34.912518 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-06-05 19:24:34.915811 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-06-05 19:24:34.915836 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-06-05 19:24:34.915849 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-06-05 19:24:34.917295 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-06-05 19:24:34.918145 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-06-05 19:24:34.918593 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-06-05 19:24:34.920249 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-06-05 19:24:34.920277 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-06-05 19:24:34.920585 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-06-05 19:24:34.921010 | orchestrator |
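The "Pin docker package version" and "Pin docker-cli package version" tasks above keep apt from drifting past the version the role selected (5:27.5.1 in this run). A hedged sketch of what such a pin looks like as an apt preferences file (the file name and package fields are assumptions, not what the osism.services.docker role literally writes), demonstrated in a temp directory rather than /etc/apt/preferences.d:

```shell
# Stand-in directory for /etc/apt/preferences.d (assumption: the role pins via
# an apt preferences entry; on a real host this file would live under /etc).
pindir=$(mktemp -d)
cat > "$pindir/docker.pref" <<'EOF'
Package: docker-ce
Pin: version 5:27.5.1*
Pin-Priority: 1000
EOF
grep -c '^Pin' "$pindir/docker.pref"   # -> 2 (the Pin and Pin-Priority lines)
```

A Pin-Priority above 1000 would additionally allow apt to downgrade to the pinned version; 1000 only prevents upgrades past it.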
2025-06-05 19:24:34.921544 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-06-05 19:24:34.921800 | orchestrator | Thursday 05 June 2025 19:24:34 +0000 (0:00:01.205) 0:06:00.300 *********
2025-06-05 19:24:35.038995 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:24:35.101544 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:24:35.167911 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:24:35.230318 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:24:35.289559 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:24:35.403613 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:24:35.404000 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:24:35.404555 | orchestrator |
2025-06-05 19:24:35.405757 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-06-05 19:24:35.406397 | orchestrator | Thursday 05 June 2025 19:24:35 +0000 (0:00:00.496) 0:06:00.796 *********
2025-06-05 19:24:39.107159 | orchestrator | ok: [testbed-manager]
2025-06-05 19:24:39.107322 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:24:39.110331 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:24:39.111511 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:24:39.112352 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:24:39.113625 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:24:39.114117 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:24:39.114949 | orchestrator |
2025-06-05 19:24:39.115878 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-06-05 19:24:39.116116 | orchestrator | Thursday 05 June 2025 19:24:39 +0000 (0:00:03.699) 0:06:04.495 *********
2025-06-05 19:24:39.236662 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:24:39.300626 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:24:39.361776 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:24:39.426440 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:24:39.485852 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:24:39.586496 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:24:39.586614 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:24:39.587739 | orchestrator |
2025-06-05 19:24:39.590699 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-06-05 19:24:39.590746 | orchestrator | Thursday 05 June 2025 19:24:39 +0000 (0:00:00.481) 0:06:04.977 *********
2025-06-05 19:24:39.661957 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-06-05 19:24:39.662672 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-06-05 19:24:39.729830 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:24:39.730260 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-06-05 19:24:39.733944 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-06-05 19:24:39.798199 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:24:39.799164 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-06-05 19:24:39.799934 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-06-05 19:24:39.880226 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:24:39.880418 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-06-05 19:24:39.881722 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-06-05 19:24:39.953294 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:24:39.953480 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-06-05 19:24:39.956754 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-06-05 19:24:40.021295 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:24:40.025191 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-06-05 19:24:40.025223 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-06-05 19:24:40.135383 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:24:40.135676 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-06-05 19:24:40.139099 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-06-05 19:24:40.139149 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:24:40.139163 | orchestrator |
2025-06-05 19:24:40.139374 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-06-05 19:24:40.140540 | orchestrator | Thursday 05 June 2025 19:24:40 +0000 (0:00:00.484) 0:06:05.527 *********
2025-06-05 19:24:40.269564 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:24:40.338904 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:24:40.399955 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:24:40.462887 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:24:40.530544 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:24:40.620879 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:24:40.621662 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:24:40.622732 | orchestrator |
2025-06-05 19:24:40.625852 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-06-05 19:24:40.625886 | orchestrator | Thursday 05 June 2025 19:24:40 +0000 (0:00:00.484) 0:06:06.012 *********
2025-06-05 19:24:40.759222 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:24:40.828989 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:24:40.883835 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:24:40.951181 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:24:41.010434 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:24:41.100353 | orchestrator |
skipping: [testbed-node-1] 2025-06-05 19:24:41.101013 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:24:41.102206 | orchestrator | 2025-06-05 19:24:41.105806 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-05 19:24:41.105859 | orchestrator | Thursday 05 June 2025 19:24:41 +0000 (0:00:00.480) 0:06:06.492 ********* 2025-06-05 19:24:41.235630 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:24:41.301192 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:24:41.371095 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:24:41.599477 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:24:41.668663 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:24:41.787314 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:24:41.787702 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:24:41.789175 | orchestrator | 2025-06-05 19:24:41.791617 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-05 19:24:41.791869 | orchestrator | Thursday 05 June 2025 19:24:41 +0000 (0:00:00.685) 0:06:07.178 ********* 2025-06-05 19:24:43.453236 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:43.454173 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:24:43.455328 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:24:43.455876 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:24:43.458140 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:24:43.458574 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:24:43.459943 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:24:43.460944 | orchestrator | 2025-06-05 19:24:43.461169 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-05 19:24:43.462230 | orchestrator | Thursday 05 June 2025 19:24:43 +0000 (0:00:01.665) 0:06:08.844 ********* 2025-06-05 19:24:44.280786 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:24:44.282475 | orchestrator | 2025-06-05 19:24:44.283326 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-05 19:24:44.284997 | orchestrator | Thursday 05 June 2025 19:24:44 +0000 (0:00:00.827) 0:06:09.671 ********* 2025-06-05 19:24:45.169924 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:45.171041 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:24:45.171878 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:24:45.172739 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:24:45.173536 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:24:45.174316 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:24:45.174337 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:24:45.177919 | orchestrator | 2025-06-05 19:24:45.177993 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-05 19:24:45.178009 | orchestrator | Thursday 05 June 2025 19:24:45 +0000 (0:00:00.889) 0:06:10.560 ********* 2025-06-05 19:24:45.665148 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:45.731292 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:24:46.344686 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:24:46.345481 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:24:46.346527 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:24:46.349586 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:24:46.350656 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:24:46.351341 | orchestrator | 2025-06-05 19:24:46.352587 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-05 19:24:46.353535 | orchestrator | Thursday 05 June 2025 19:24:46 
+0000 (0:00:01.176) 0:06:11.737 ********* 2025-06-05 19:24:47.675163 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:47.675896 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:24:47.676683 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:24:47.678245 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:24:47.679216 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:24:47.680215 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:24:47.681792 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:24:47.682352 | orchestrator | 2025-06-05 19:24:47.683533 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-05 19:24:47.683793 | orchestrator | Thursday 05 June 2025 19:24:47 +0000 (0:00:01.330) 0:06:13.067 ********* 2025-06-05 19:24:47.803221 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:24:49.069383 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:24:49.069464 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:24:49.070706 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:24:49.071276 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:24:49.073242 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:24:49.075159 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:24:49.075909 | orchestrator | 2025-06-05 19:24:49.076731 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-05 19:24:49.077778 | orchestrator | Thursday 05 June 2025 19:24:49 +0000 (0:00:01.389) 0:06:14.457 ********* 2025-06-05 19:24:50.425280 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:50.425849 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:24:50.427229 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:24:50.428245 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:24:50.429144 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:24:50.429942 | orchestrator | changed: [testbed-node-1] 2025-06-05 
19:24:50.430849 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:24:50.431643 | orchestrator | 2025-06-05 19:24:50.432248 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-05 19:24:50.433101 | orchestrator | Thursday 05 June 2025 19:24:50 +0000 (0:00:01.356) 0:06:15.814 ********* 2025-06-05 19:24:51.836110 | orchestrator | changed: [testbed-manager] 2025-06-05 19:24:51.836415 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:24:51.838780 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:24:51.838832 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:24:51.838846 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:24:51.839819 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:24:51.840489 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:24:51.841279 | orchestrator | 2025-06-05 19:24:51.842156 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-05 19:24:51.844071 | orchestrator | Thursday 05 June 2025 19:24:51 +0000 (0:00:01.413) 0:06:17.228 ********* 2025-06-05 19:24:52.884609 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:24:52.885385 | orchestrator | 2025-06-05 19:24:52.888777 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-05 19:24:52.888827 | orchestrator | Thursday 05 June 2025 19:24:52 +0000 (0:00:01.048) 0:06:18.276 ********* 2025-06-05 19:24:54.309115 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:54.309225 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:24:54.309923 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:24:54.311380 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:24:54.311784 | orchestrator | ok: 
[testbed-node-0] 2025-06-05 19:24:54.312904 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:24:54.313355 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:24:54.313866 | orchestrator | 2025-06-05 19:24:54.314331 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-05 19:24:54.314923 | orchestrator | Thursday 05 June 2025 19:24:54 +0000 (0:00:01.423) 0:06:19.699 ********* 2025-06-05 19:24:55.437242 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:55.437990 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:24:55.438148 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:24:55.439149 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:24:55.439986 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:24:55.440672 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:24:55.441307 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:24:55.441918 | orchestrator | 2025-06-05 19:24:55.442530 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-05 19:24:55.443079 | orchestrator | Thursday 05 June 2025 19:24:55 +0000 (0:00:01.127) 0:06:20.827 ********* 2025-06-05 19:24:56.800015 | orchestrator | ok: [testbed-manager] 2025-06-05 19:24:56.800341 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:24:56.801713 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:24:56.801979 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:24:56.804321 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:24:56.805976 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:24:56.806010 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:24:56.806090 | orchestrator | 2025-06-05 19:24:56.806496 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-05 19:24:56.807118 | orchestrator | Thursday 05 June 2025 19:24:56 +0000 (0:00:01.362) 0:06:22.190 ********* 2025-06-05 19:24:57.937880 | orchestrator | ok: [testbed-manager] 2025-06-05 
19:24:57.939830 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:24:57.940151 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:24:57.942377 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:24:57.943150 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:24:57.943885 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:24:57.944474 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:24:57.945513 | orchestrator | 2025-06-05 19:24:57.946190 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-05 19:24:57.946667 | orchestrator | Thursday 05 June 2025 19:24:57 +0000 (0:00:01.137) 0:06:23.327 ********* 2025-06-05 19:24:59.060516 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:24:59.061213 | orchestrator | 2025-06-05 19:24:59.066197 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.066281 | orchestrator | Thursday 05 June 2025 19:24:58 +0000 (0:00:00.842) 0:06:24.169 ********* 2025-06-05 19:24:59.066294 | orchestrator | 2025-06-05 19:24:59.066588 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.067659 | orchestrator | Thursday 05 June 2025 19:24:58 +0000 (0:00:00.038) 0:06:24.208 ********* 2025-06-05 19:24:59.070718 | orchestrator | 2025-06-05 19:24:59.071645 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.072447 | orchestrator | Thursday 05 June 2025 19:24:58 +0000 (0:00:00.045) 0:06:24.254 ********* 2025-06-05 19:24:59.073327 | orchestrator | 2025-06-05 19:24:59.074414 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.075302 | 
orchestrator | Thursday 05 June 2025 19:24:58 +0000 (0:00:00.037) 0:06:24.292 ********* 2025-06-05 19:24:59.075824 | orchestrator | 2025-06-05 19:24:59.076488 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.076962 | orchestrator | Thursday 05 June 2025 19:24:58 +0000 (0:00:00.037) 0:06:24.329 ********* 2025-06-05 19:24:59.077539 | orchestrator | 2025-06-05 19:24:59.078376 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.078795 | orchestrator | Thursday 05 June 2025 19:24:58 +0000 (0:00:00.044) 0:06:24.373 ********* 2025-06-05 19:24:59.079519 | orchestrator | 2025-06-05 19:24:59.080036 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-05 19:24:59.080668 | orchestrator | Thursday 05 June 2025 19:24:59 +0000 (0:00:00.037) 0:06:24.411 ********* 2025-06-05 19:24:59.081138 | orchestrator | 2025-06-05 19:24:59.081671 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-05 19:24:59.082246 | orchestrator | Thursday 05 June 2025 19:24:59 +0000 (0:00:00.038) 0:06:24.449 ********* 2025-06-05 19:25:00.534386 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:00.534483 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:00.535189 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:00.535830 | orchestrator | 2025-06-05 19:25:00.536537 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-05 19:25:00.536972 | orchestrator | Thursday 05 June 2025 19:25:00 +0000 (0:00:01.474) 0:06:25.924 ********* 2025-06-05 19:25:01.874707 | orchestrator | changed: [testbed-manager] 2025-06-05 19:25:01.875342 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:01.876221 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:01.876429 | orchestrator | changed: [testbed-node-5] 
2025-06-05 19:25:01.876968 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:01.877641 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:01.878117 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:01.878558 | orchestrator | 2025-06-05 19:25:01.879017 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-05 19:25:01.879410 | orchestrator | Thursday 05 June 2025 19:25:01 +0000 (0:00:01.339) 0:06:27.263 ********* 2025-06-05 19:25:03.034317 | orchestrator | changed: [testbed-manager] 2025-06-05 19:25:03.035083 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:03.036660 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:03.037919 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:03.041676 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:03.043345 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:03.044225 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:03.045730 | orchestrator | 2025-06-05 19:25:03.047152 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-05 19:25:03.047999 | orchestrator | Thursday 05 June 2025 19:25:03 +0000 (0:00:01.159) 0:06:28.423 ********* 2025-06-05 19:25:03.162328 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:05.492338 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:05.492617 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:05.493645 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:05.494592 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:05.495705 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:05.496633 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:05.498214 | orchestrator | 2025-06-05 19:25:05.498977 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-05 19:25:05.499805 | orchestrator | Thursday 05 June 2025 
19:25:05 +0000 (0:00:02.458) 0:06:30.881 ********* 2025-06-05 19:25:05.592913 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:05.593927 | orchestrator | 2025-06-05 19:25:05.594191 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-05 19:25:05.594333 | orchestrator | Thursday 05 June 2025 19:25:05 +0000 (0:00:00.103) 0:06:30.985 ********* 2025-06-05 19:25:06.620395 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:06.621013 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:06.621920 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:06.624982 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:06.626151 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:06.626732 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:06.627650 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:06.628461 | orchestrator | 2025-06-05 19:25:06.629250 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-05 19:25:06.629792 | orchestrator | Thursday 05 June 2025 19:25:06 +0000 (0:00:01.025) 0:06:32.010 ********* 2025-06-05 19:25:06.951573 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:07.018312 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:07.083840 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:07.159249 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:07.221187 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:07.343472 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:07.343681 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:07.344774 | orchestrator | 2025-06-05 19:25:07.345771 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-05 19:25:07.346659 | orchestrator | Thursday 05 June 2025 19:25:07 +0000 (0:00:00.724) 0:06:32.735 ********* 2025-06-05 19:25:08.252211 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:25:08.252531 | orchestrator | 2025-06-05 19:25:08.253404 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-05 19:25:08.254310 | orchestrator | Thursday 05 June 2025 19:25:08 +0000 (0:00:00.909) 0:06:33.644 ********* 2025-06-05 19:25:08.667912 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:09.121022 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:09.121290 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:09.122009 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:09.122622 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:09.123175 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:09.125197 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:09.125598 | orchestrator | 2025-06-05 19:25:09.126117 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-05 19:25:09.126636 | orchestrator | Thursday 05 June 2025 19:25:09 +0000 (0:00:00.869) 0:06:34.514 ********* 2025-06-05 19:25:11.812408 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-05 19:25:11.812789 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-05 19:25:11.813752 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-05 19:25:11.815433 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-05 19:25:11.818420 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-05 19:25:11.818621 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-05 19:25:11.819668 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-05 19:25:11.820439 | orchestrator | changed: 
[testbed-node-2] => (item=docker_containers) 2025-06-05 19:25:11.821235 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-05 19:25:11.821986 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-05 19:25:11.822624 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-05 19:25:11.823500 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-05 19:25:11.824411 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-05 19:25:11.825269 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-05 19:25:11.825691 | orchestrator | 2025-06-05 19:25:11.826413 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-05 19:25:11.826934 | orchestrator | Thursday 05 June 2025 19:25:11 +0000 (0:00:02.688) 0:06:37.202 ********* 2025-06-05 19:25:11.951378 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:12.012485 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:12.081901 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:12.144554 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:12.206697 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:12.320843 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:12.321020 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:12.322119 | orchestrator | 2025-06-05 19:25:12.324474 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-05 19:25:12.324494 | orchestrator | Thursday 05 June 2025 19:25:12 +0000 (0:00:00.510) 0:06:37.713 ********* 2025-06-05 19:25:13.136761 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:25:13.139577 
| orchestrator | 2025-06-05 19:25:13.140782 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-05 19:25:13.141908 | orchestrator | Thursday 05 June 2025 19:25:13 +0000 (0:00:00.813) 0:06:38.527 ********* 2025-06-05 19:25:13.691619 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:13.776290 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:14.228701 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:14.229555 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:14.231395 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:14.231552 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:14.232340 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:14.233164 | orchestrator | 2025-06-05 19:25:14.233895 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-05 19:25:14.234834 | orchestrator | Thursday 05 June 2025 19:25:14 +0000 (0:00:01.091) 0:06:39.618 ********* 2025-06-05 19:25:14.643490 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:15.046393 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:15.047299 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:15.050641 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:15.050694 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:15.050923 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:15.052126 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:15.052888 | orchestrator | 2025-06-05 19:25:15.053890 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-05 19:25:15.054267 | orchestrator | Thursday 05 June 2025 19:25:15 +0000 (0:00:00.817) 0:06:40.436 ********* 2025-06-05 19:25:15.180568 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:15.245005 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:15.309557 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:15.376509 | 
orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:15.440509 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:15.539919 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:15.540020 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:15.542195 | orchestrator | 2025-06-05 19:25:15.542388 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-05 19:25:15.543185 | orchestrator | Thursday 05 June 2025 19:25:15 +0000 (0:00:00.491) 0:06:40.927 ********* 2025-06-05 19:25:17.158541 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:17.159558 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:17.162619 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:17.163502 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:17.164721 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:17.165387 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:17.166426 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:17.167681 | orchestrator | 2025-06-05 19:25:17.167992 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-05 19:25:17.169760 | orchestrator | Thursday 05 June 2025 19:25:17 +0000 (0:00:01.622) 0:06:42.550 ********* 2025-06-05 19:25:17.283760 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:17.352536 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:17.414482 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:17.474344 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:17.539317 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:17.621901 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:17.622219 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:17.623258 | orchestrator | 2025-06-05 19:25:17.624448 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-05 19:25:17.625319 | orchestrator | 
Thursday 05 June 2025 19:25:17 +0000 (0:00:00.462) 0:06:43.013 ********* 2025-06-05 19:25:25.352000 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:25.352996 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:25.353775 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:25.356379 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:25.358663 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:25.359728 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:25.360860 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:25.361542 | orchestrator | 2025-06-05 19:25:25.362607 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-05 19:25:25.363250 | orchestrator | Thursday 05 June 2025 19:25:25 +0000 (0:00:07.725) 0:06:50.738 ********* 2025-06-05 19:25:26.842015 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:26.842442 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:26.846502 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:26.847589 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:26.848642 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:26.849002 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:26.849861 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:26.850531 | orchestrator | 2025-06-05 19:25:26.851599 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-05 19:25:26.852060 | orchestrator | Thursday 05 June 2025 19:25:26 +0000 (0:00:01.494) 0:06:52.233 ********* 2025-06-05 19:25:28.584004 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:28.584154 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:28.584231 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:28.584753 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:28.585437 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:28.586967 | 
orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:28.587238 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:28.587791 | orchestrator | 2025-06-05 19:25:28.588080 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-05 19:25:28.589227 | orchestrator | Thursday 05 June 2025 19:25:28 +0000 (0:00:01.737) 0:06:53.971 ********* 2025-06-05 19:25:30.325518 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:30.326602 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:30.329592 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:30.329638 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:30.329650 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:30.330002 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:30.330718 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:30.331558 | orchestrator | 2025-06-05 19:25:30.332425 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-05 19:25:30.332710 | orchestrator | Thursday 05 June 2025 19:25:30 +0000 (0:00:01.743) 0:06:55.715 ********* 2025-06-05 19:25:30.776926 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:31.403473 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:31.404056 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:31.408144 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:31.408695 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:31.409921 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:31.413321 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:31.413894 | orchestrator | 2025-06-05 19:25:31.415250 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-05 19:25:31.415511 | orchestrator | Thursday 05 June 2025 19:25:31 +0000 (0:00:01.080) 0:06:56.795 ********* 2025-06-05 19:25:31.548997 | orchestrator | skipping: [testbed-manager] 2025-06-05 
19:25:31.622322 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:31.686303 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:31.757835 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:31.820339 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:32.219826 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:32.220535 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:32.222156 | orchestrator | 2025-06-05 19:25:32.222870 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-05 19:25:32.223463 | orchestrator | Thursday 05 June 2025 19:25:32 +0000 (0:00:00.808) 0:06:57.603 ********* 2025-06-05 19:25:32.355003 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:32.422486 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:32.495353 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:32.557349 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:32.620857 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:32.718751 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:32.719581 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:32.722864 | orchestrator | 2025-06-05 19:25:32.722924 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-05 19:25:32.722939 | orchestrator | Thursday 05 June 2025 19:25:32 +0000 (0:00:00.506) 0:06:58.110 ********* 2025-06-05 19:25:32.843680 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:32.913599 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:32.976164 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:33.038731 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:33.279611 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:33.387976 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:33.389190 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:33.393014 | orchestrator | 2025-06-05 
19:25:33.393154 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-05 19:25:33.393169 | orchestrator | Thursday 05 June 2025 19:25:33 +0000 (0:00:00.668) 0:06:58.778 ********* 2025-06-05 19:25:33.523363 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:33.586807 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:33.647282 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:33.714953 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:33.777728 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:33.874302 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:33.875112 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:33.876220 | orchestrator | 2025-06-05 19:25:33.877089 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-05 19:25:33.877910 | orchestrator | Thursday 05 June 2025 19:25:33 +0000 (0:00:00.487) 0:06:59.265 ********* 2025-06-05 19:25:34.009771 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:34.079264 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:34.152100 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:34.220145 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:34.278775 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:34.385227 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:34.386401 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:34.388263 | orchestrator | 2025-06-05 19:25:34.388816 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-05 19:25:34.389950 | orchestrator | Thursday 05 June 2025 19:25:34 +0000 (0:00:00.510) 0:06:59.776 ********* 2025-06-05 19:25:39.991347 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:39.992676 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:39.993445 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:39.994320 | orchestrator | ok: [testbed-node-4] 2025-06-05 
19:25:39.995389 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:39.995600 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:39.996120 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:39.996520 | orchestrator | 2025-06-05 19:25:39.997038 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-05 19:25:39.997580 | orchestrator | Thursday 05 June 2025 19:25:39 +0000 (0:00:05.606) 0:07:05.382 ********* 2025-06-05 19:25:40.196195 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:25:40.263495 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:25:40.337962 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:25:40.396862 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:25:40.518528 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:25:40.518954 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:25:40.520429 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:25:40.521373 | orchestrator | 2025-06-05 19:25:40.522194 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-05 19:25:40.525268 | orchestrator | Thursday 05 June 2025 19:25:40 +0000 (0:00:00.527) 0:07:05.910 ********* 2025-06-05 19:25:41.478872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:25:41.479642 | orchestrator | 2025-06-05 19:25:41.480795 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-05 19:25:41.481829 | orchestrator | Thursday 05 June 2025 19:25:41 +0000 (0:00:00.960) 0:07:06.870 ********* 2025-06-05 19:25:43.253536 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:43.253891 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:43.255075 | orchestrator | ok: 
[testbed-node-4] 2025-06-05 19:25:43.255893 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:43.256060 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:43.256860 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:43.257724 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:43.259836 | orchestrator | 2025-06-05 19:25:43.259869 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-05 19:25:43.259883 | orchestrator | Thursday 05 June 2025 19:25:43 +0000 (0:00:01.773) 0:07:08.643 ********* 2025-06-05 19:25:44.434132 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:44.434532 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:44.435575 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:44.440568 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:44.441841 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:44.443190 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:44.444058 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:44.445059 | orchestrator | 2025-06-05 19:25:44.446090 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-05 19:25:44.446936 | orchestrator | Thursday 05 June 2025 19:25:44 +0000 (0:00:01.182) 0:07:09.825 ********* 2025-06-05 19:25:45.560691 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:45.562284 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:45.562663 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:45.563596 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:45.564534 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:45.564989 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:45.565545 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:45.566263 | orchestrator | 2025-06-05 19:25:45.566768 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-05 19:25:45.567563 | orchestrator | Thursday 05 June 2025 
19:25:45 +0000 (0:00:01.123) 0:07:10.949 ********* 2025-06-05 19:25:47.231431 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.231536 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.231550 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.232321 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.233463 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.233756 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.234348 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-05 19:25:47.234450 | orchestrator | 2025-06-05 19:25:47.235272 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-05 19:25:47.235528 | orchestrator | Thursday 05 June 2025 19:25:47 +0000 (0:00:01.671) 0:07:12.620 ********* 2025-06-05 19:25:48.000597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:25:48.000704 | orchestrator | 2025-06-05 19:25:48.003864 | orchestrator | TASK 
[osism.services.lldpd : Install lldpd package] **************************** 2025-06-05 19:25:48.003894 | orchestrator | Thursday 05 June 2025 19:25:47 +0000 (0:00:00.769) 0:07:13.390 ********* 2025-06-05 19:25:56.595514 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:25:56.596236 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:25:56.598316 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:25:56.599691 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:25:56.600987 | orchestrator | changed: [testbed-manager] 2025-06-05 19:25:56.602640 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:25:56.603636 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:25:56.604787 | orchestrator | 2025-06-05 19:25:56.605433 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-05 19:25:56.606131 | orchestrator | Thursday 05 June 2025 19:25:56 +0000 (0:00:08.592) 0:07:21.982 ********* 2025-06-05 19:25:58.295728 | orchestrator | ok: [testbed-manager] 2025-06-05 19:25:58.295902 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:58.297446 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:58.299118 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:58.300324 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:58.302109 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:58.302554 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:58.303301 | orchestrator | 2025-06-05 19:25:58.304248 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-05 19:25:58.305143 | orchestrator | Thursday 05 June 2025 19:25:58 +0000 (0:00:01.702) 0:07:23.685 ********* 2025-06-05 19:25:59.565722 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:25:59.565936 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:25:59.566804 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:25:59.567868 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:25:59.570453 
| orchestrator | ok: [testbed-node-1] 2025-06-05 19:25:59.570476 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:25:59.571557 | orchestrator | 2025-06-05 19:25:59.571761 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-05 19:25:59.572707 | orchestrator | Thursday 05 June 2025 19:25:59 +0000 (0:00:01.267) 0:07:24.953 ********* 2025-06-05 19:26:01.160965 | orchestrator | changed: [testbed-manager] 2025-06-05 19:26:01.161609 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:26:01.163924 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:26:01.164227 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:26:01.165132 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:26:01.165826 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:26:01.166484 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:26:01.167127 | orchestrator | 2025-06-05 19:26:01.167822 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-05 19:26:01.168199 | orchestrator | 2025-06-05 19:26:01.168719 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-05 19:26:01.169053 | orchestrator | Thursday 05 June 2025 19:26:01 +0000 (0:00:01.598) 0:07:26.552 ********* 2025-06-05 19:26:01.287694 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:26:01.348150 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:26:01.407424 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:26:01.472773 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:26:01.533164 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:26:01.645464 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:26:01.645651 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:26:01.645757 | orchestrator | 2025-06-05 19:26:01.645960 | orchestrator | PLAY [Apply bootstrap roles part 3] 
******************************************** 2025-06-05 19:26:01.646638 | orchestrator | 2025-06-05 19:26:01.649384 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-05 19:26:01.650325 | orchestrator | Thursday 05 June 2025 19:26:01 +0000 (0:00:00.483) 0:07:27.035 ********* 2025-06-05 19:26:03.142566 | orchestrator | changed: [testbed-manager] 2025-06-05 19:26:03.142725 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:26:03.143025 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:26:03.143500 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:26:03.143772 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:26:03.144946 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:26:03.145143 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:26:03.147781 | orchestrator | 2025-06-05 19:26:03.147805 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-05 19:26:03.147817 | orchestrator | Thursday 05 June 2025 19:26:03 +0000 (0:00:01.495) 0:07:28.531 ********* 2025-06-05 19:26:05.107411 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:05.107530 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:05.107550 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:05.108829 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:05.109216 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:05.110384 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:05.111036 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:05.112178 | orchestrator | 2025-06-05 19:26:05.115799 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-05 19:26:05.115857 | orchestrator | Thursday 05 June 2025 19:26:05 +0000 (0:00:01.952) 0:07:30.483 ********* 2025-06-05 19:26:05.423245 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:26:05.486199 | orchestrator | skipping: [testbed-node-3] 
2025-06-05 19:26:05.554411 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:26:05.616206 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:26:05.679622 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:26:06.052485 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:26:06.053037 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:26:06.054153 | orchestrator | 2025-06-05 19:26:06.060764 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-05 19:26:06.060809 | orchestrator | Thursday 05 June 2025 19:26:06 +0000 (0:00:00.961) 0:07:31.445 ********* 2025-06-05 19:26:07.275448 | orchestrator | changed: [testbed-manager] 2025-06-05 19:26:07.276344 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:26:07.278556 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:26:07.280878 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:26:07.283475 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:26:07.283589 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:26:07.283606 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:26:07.285098 | orchestrator | 2025-06-05 19:26:07.285741 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-05 19:26:07.288139 | orchestrator | 2025-06-05 19:26:07.289095 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-05 19:26:07.291284 | orchestrator | Thursday 05 June 2025 19:26:07 +0000 (0:00:01.221) 0:07:32.666 ********* 2025-06-05 19:26:08.237951 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:26:08.238939 | orchestrator | 2025-06-05 19:26:08.240016 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-05 19:26:08.240941 | orchestrator | Thursday 
05 June 2025 19:26:08 +0000 (0:00:00.962) 0:07:33.629 ********* 2025-06-05 19:26:08.655200 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:09.084306 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:09.085023 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:09.087299 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:09.087487 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:09.088523 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:09.089946 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:09.090892 | orchestrator | 2025-06-05 19:26:09.092032 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-05 19:26:09.092490 | orchestrator | Thursday 05 June 2025 19:26:09 +0000 (0:00:00.846) 0:07:34.475 ********* 2025-06-05 19:26:10.205824 | orchestrator | changed: [testbed-manager] 2025-06-05 19:26:10.207320 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:26:10.208590 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:26:10.210415 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:26:10.211576 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:26:10.213059 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:26:10.214112 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:26:10.215339 | orchestrator | 2025-06-05 19:26:10.216628 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-05 19:26:10.217305 | orchestrator | Thursday 05 June 2025 19:26:10 +0000 (0:00:01.120) 0:07:35.595 ********* 2025-06-05 19:26:11.219285 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:26:11.220252 | orchestrator | 2025-06-05 19:26:11.221038 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-05 19:26:11.222090 | orchestrator | Thursday 05 
June 2025 19:26:11 +0000 (0:00:01.013) 0:07:36.609 ********* 2025-06-05 19:26:11.615761 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:12.026949 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:12.027460 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:12.028024 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:12.029519 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:12.029613 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:12.030131 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:12.030804 | orchestrator | 2025-06-05 19:26:12.031396 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-05 19:26:12.032190 | orchestrator | Thursday 05 June 2025 19:26:12 +0000 (0:00:00.805) 0:07:37.415 ********* 2025-06-05 19:26:12.456544 | orchestrator | changed: [testbed-manager] 2025-06-05 19:26:13.122581 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:26:13.122810 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:26:13.124276 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:26:13.125186 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:26:13.126516 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:26:13.127250 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:26:13.128787 | orchestrator | 2025-06-05 19:26:13.129248 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:26:13.129774 | orchestrator | 2025-06-05 19:26:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-05 19:26:13.130011 | orchestrator | 2025-06-05 19:26:13 | INFO  | Please wait and do not abort execution. 
2025-06-05 19:26:13.132385 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-05 19:26:13.132451 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-05 19:26:13.136202 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-05 19:26:13.136229 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-05 19:26:13.136241 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-05 19:26:13.136253 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-05 19:26:13.136784 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-05 19:26:13.137867 | orchestrator |
2025-06-05 19:26:13.138113 | orchestrator |
2025-06-05 19:26:13.138482 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:26:13.139246 | orchestrator | Thursday 05 June 2025 19:26:13 +0000 (0:00:01.100) 0:07:38.515 *********
2025-06-05 19:26:13.140025 | orchestrator | ===============================================================================
2025-06-05 19:26:13.140403 | orchestrator | osism.commons.packages : Install required packages --------------------- 74.21s
2025-06-05 19:26:13.140953 | orchestrator | osism.commons.packages : Download required packages -------------------- 34.87s
2025-06-05 19:26:13.141408 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.72s
2025-06-05 19:26:13.141807 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.43s
2025-06-05 19:26:13.142464 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.72s
2025-06-05 19:26:13.142854 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.25s
2025-06-05 19:26:13.143272 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.16s
2025-06-05 19:26:13.143931 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.54s
2025-06-05 19:26:13.144234 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.83s
2025-06-05 19:26:13.144674 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.59s
2025-06-05 19:26:13.145061 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.34s
2025-06-05 19:26:13.145674 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.06s
2025-06-05 19:26:13.145920 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.73s
2025-06-05 19:26:13.146268 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.57s
2025-06-05 19:26:13.146904 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.27s
2025-06-05 19:26:13.147175 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.19s
2025-06-05 19:26:13.147541 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.21s
2025-06-05 19:26:13.147861 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.69s
2025-06-05 19:26:13.148240 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.67s
2025-06-05 19:26:13.148537 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.62s
2025-06-05 19:26:13.796604 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-05 19:26:13.796713 |
orchestrator | + osism apply network 2025-06-05 19:26:15.913484 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:26:15.913572 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:26:15.913586 | orchestrator | Registering Redlock._release_script 2025-06-05 19:26:15.974476 | orchestrator | 2025-06-05 19:26:15 | INFO  | Task 73f09b35-ec4c-4eab-b5d1-552bb60dfb1c (network) was prepared for execution. 2025-06-05 19:26:15.974613 | orchestrator | 2025-06-05 19:26:15 | INFO  | It takes a moment until task 73f09b35-ec4c-4eab-b5d1-552bb60dfb1c (network) has been started and output is visible here. 2025-06-05 19:26:20.241847 | orchestrator | 2025-06-05 19:26:20.244972 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-05 19:26:20.245081 | orchestrator | 2025-06-05 19:26:20.246296 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-05 19:26:20.247917 | orchestrator | Thursday 05 June 2025 19:26:20 +0000 (0:00:00.275) 0:00:00.275 ********* 2025-06-05 19:26:20.387337 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:20.463557 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:20.539386 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:20.613657 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:20.808265 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:20.940059 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:20.941114 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:20.945195 | orchestrator | 2025-06-05 19:26:20.945303 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-05 19:26:20.945327 | orchestrator | Thursday 05 June 2025 19:26:20 +0000 (0:00:00.697) 0:00:00.973 ********* 2025-06-05 19:26:22.158722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:26:22.159738 | orchestrator | 2025-06-05 19:26:22.160437 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-05 19:26:22.162326 | orchestrator | Thursday 05 June 2025 19:26:22 +0000 (0:00:01.218) 0:00:02.191 ********* 2025-06-05 19:26:24.191768 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:24.192510 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:24.193168 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:24.194391 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:24.197803 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:24.199233 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:24.199345 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:24.200679 | orchestrator | 2025-06-05 19:26:24.204702 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-05 19:26:24.204723 | orchestrator | Thursday 05 June 2025 19:26:24 +0000 (0:00:02.033) 0:00:04.224 ********* 2025-06-05 19:26:25.950398 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:25.951079 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:25.951687 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:25.952661 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:25.953998 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:25.954308 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:25.955577 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:25.956395 | orchestrator | 2025-06-05 19:26:25.957357 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-05 19:26:25.958308 | orchestrator | Thursday 05 June 2025 19:26:25 +0000 (0:00:01.757) 0:00:05.982 ********* 2025-06-05 19:26:26.909075 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-05 19:26:26.910443 | 
orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-05 19:26:26.911613 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-05 19:26:26.912780 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-05 19:26:26.913625 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-05 19:26:26.914468 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-05 19:26:26.915191 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-05 19:26:26.916176 | orchestrator | 2025-06-05 19:26:26.916954 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-05 19:26:26.917388 | orchestrator | Thursday 05 June 2025 19:26:26 +0000 (0:00:00.960) 0:00:06.942 ********* 2025-06-05 19:26:30.299073 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:26:30.299492 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 19:26:30.302554 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-05 19:26:30.304103 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-05 19:26:30.307246 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-05 19:26:30.310717 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-05 19:26:30.311190 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-05 19:26:30.311830 | orchestrator | 2025-06-05 19:26:30.312452 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-05 19:26:30.313127 | orchestrator | Thursday 05 June 2025 19:26:30 +0000 (0:00:03.387) 0:00:10.330 ********* 2025-06-05 19:26:31.741484 | orchestrator | changed: [testbed-manager] 2025-06-05 19:26:31.741754 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:26:31.742796 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:26:31.744441 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:26:31.747815 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:26:31.747889 | 
orchestrator | changed: [testbed-node-4] 2025-06-05 19:26:31.747903 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:26:31.747915 | orchestrator | 2025-06-05 19:26:31.748722 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-05 19:26:31.749377 | orchestrator | Thursday 05 June 2025 19:26:31 +0000 (0:00:01.447) 0:00:11.777 ********* 2025-06-05 19:26:32.825279 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 19:26:33.924855 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:26:33.924943 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-05 19:26:33.926186 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-05 19:26:33.927766 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-05 19:26:33.928496 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-05 19:26:33.929149 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-05 19:26:33.930172 | orchestrator | 2025-06-05 19:26:33.930814 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-05 19:26:33.931221 | orchestrator | Thursday 05 June 2025 19:26:33 +0000 (0:00:02.181) 0:00:13.958 ********* 2025-06-05 19:26:34.344542 | orchestrator | ok: [testbed-manager] 2025-06-05 19:26:35.078152 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:26:35.079831 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:26:35.080400 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:26:35.082247 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:26:35.083511 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:26:35.084570 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:26:35.085716 | orchestrator | 2025-06-05 19:26:35.087042 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-05 19:26:35.088495 | orchestrator | Thursday 05 June 2025 19:26:35 +0000 (0:00:01.150) 0:00:15.108 ********* 2025-06-05 19:26:35.247400 
| orchestrator | skipping: [testbed-manager]
2025-06-05 19:26:35.339356 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:26:35.421325 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:26:35.510489 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:26:35.597307 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:26:35.743155 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:26:35.749001 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:26:35.749076 | orchestrator |
2025-06-05 19:26:35.749086 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-05 19:26:35.749902 | orchestrator | Thursday 05 June 2025 19:26:35 +0000 (0:00:00.667) 0:00:15.776 *********
2025-06-05 19:26:37.946950 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:26:37.947118 | orchestrator | ok: [testbed-manager]
2025-06-05 19:26:37.947924 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:26:37.949910 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:26:37.950512 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:26:37.953047 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:26:37.954137 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:26:37.955237 | orchestrator |
2025-06-05 19:26:37.956597 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-05 19:26:37.958158 | orchestrator | Thursday 05 June 2025 19:26:37 +0000 (0:00:02.199) 0:00:17.975 *********
2025-06-05 19:26:38.248351 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:26:38.332841 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:26:38.421829 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:26:38.506657 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:26:38.912080 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:26:38.912858 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:26:38.912879 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-05 19:26:38.914498 | orchestrator |
2025-06-05 19:26:38.916091 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-05 19:26:38.917588 | orchestrator | Thursday 05 June 2025 19:26:38 +0000 (0:00:00.971) 0:00:18.947 *********
2025-06-05 19:26:40.600414 | orchestrator | ok: [testbed-manager]
2025-06-05 19:26:40.601670 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:26:40.603631 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:26:40.606081 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:26:40.607566 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:26:40.611003 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:26:40.611118 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:26:40.611142 | orchestrator |
2025-06-05 19:26:40.611164 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-05 19:26:40.611184 | orchestrator | Thursday 05 June 2025 19:26:40 +0000 (0:00:01.682) 0:00:20.630 *********
2025-06-05 19:26:41.864151 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:26:41.864258 | orchestrator |
2025-06-05 19:26:41.864274 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-05 19:26:41.864287 | orchestrator | Thursday 05 June 2025 19:26:41 +0000 (0:00:01.262) 0:00:21.892 *********
2025-06-05 19:26:42.421869 | orchestrator | ok: [testbed-manager]
2025-06-05 19:26:42.834907 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:26:42.837158 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:26:42.839065 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:26:42.840072 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:26:42.841686 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:26:42.842737 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:26:42.843843 | orchestrator |
2025-06-05 19:26:42.844611 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-05 19:26:42.845442 | orchestrator | Thursday 05 June 2025 19:26:42 +0000 (0:00:00.977) 0:00:22.869 *********
2025-06-05 19:26:43.216369 | orchestrator | ok: [testbed-manager]
2025-06-05 19:26:43.302480 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:26:43.386255 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:26:43.470795 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:26:43.552422 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:26:43.686374 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:26:43.687946 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:26:43.690891 | orchestrator |
2025-06-05 19:26:43.691020 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-05 19:26:43.691046 | orchestrator | Thursday 05 June 2025 19:26:43 +0000 (0:00:00.852) 0:00:23.721 *********
2025-06-05 19:26:44.125057 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.125797 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.438396 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.439224 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.440126 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.441397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.903738 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.903848 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.904383 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.905972 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.907329 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.908077 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.909181 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-05 19:26:44.910061 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-06-05 19:26:44.910938 | orchestrator |
2025-06-05 19:26:44.911823 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-05 19:26:44.912368 | orchestrator | Thursday 05 June 2025 19:26:44 +0000 (0:00:01.212) 0:00:24.934 *********
2025-06-05 19:26:45.066397 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:26:45.157527 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:26:45.246620 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:26:45.326285 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:26:45.405499 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:26:45.515628 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:26:45.516288 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:26:45.517057 | orchestrator |
2025-06-05 19:26:45.517723 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-05 19:26:45.518439 | orchestrator | Thursday 05 June 2025 19:26:45 +0000 (0:00:00.617) 0:00:25.552 *********
2025-06-05 19:26:50.101561 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-node-3, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-4, testbed-node-5
2025-06-05 19:26:50.101656 | orchestrator |
2025-06-05 19:26:50.102582 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-05 19:26:50.103298 | orchestrator | Thursday 05 June 2025 19:26:50 +0000 (0:00:04.582) 0:00:30.134 *********
2025-06-05 19:26:55.193358 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.193450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.193465 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.193477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.193490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.194872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.195528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.196197 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.196860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:26:55.197789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.198377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.198900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.199516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.199931 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:26:55.201141 | orchestrator |
2025-06-05 19:26:55.201936 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-06-05 19:26:55.202887 | orchestrator | Thursday 05 June 2025 19:26:55 +0000 (0:00:05.088) 0:00:35.222 *********
2025-06-05 19:27:00.697828 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.699309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.701045 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.702463 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.704408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.705131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.706152 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.706676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.707381 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-05 19:27:00.708048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.708526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.709126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.709893 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.710163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-05 19:27:00.710900 | orchestrator |
2025-06-05 19:27:00.711425 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-06-05 19:27:00.711740 | orchestrator | Thursday 05 June 2025 19:27:00 +0000 (0:00:05.510) 0:00:40.732 *********
2025-06-05 19:27:01.789149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:27:01.789287 | orchestrator |
2025-06-05 19:27:01.790171 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-05 19:27:01.790913 | orchestrator | Thursday 05 June 2025 19:27:01 +0000 (0:00:01.089) 0:00:41.821 *********
2025-06-05 19:27:02.145032 | orchestrator | ok: [testbed-manager]
2025-06-05 19:27:02.889697 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:27:02.890635 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:27:02.892437 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:27:02.893474 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:27:02.894872 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:27:02.895718 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:27:02.897244 | orchestrator |
2025-06-05 19:27:02.898147 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-05 19:27:02.898838 | orchestrator | Thursday 05 June 2025 19:27:02 +0000 (0:00:01.102) 0:00:42.924 *********
2025-06-05 19:27:02.971255 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:02.971523 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:02.972601 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:03.064915 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:03.065205 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:03.065228 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:03.065466 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:03.065697 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:03.165748 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:27:03.166117 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:03.166703 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:03.167241 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:03.167977 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:03.253738 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:27:03.254177 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:03.255162 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:03.257677 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:03.257715 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:03.343931 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:27:03.344794 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:03.345382 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:03.346544 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:03.349662 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:03.624073 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:27:03.624637 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:03.625438 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:03.626437 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:03.629634 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:04.876263 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:27:04.876770 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:27:04.881210 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-05 19:27:04.881239 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-05 19:27:04.881273 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-05 19:27:04.881286 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-05 19:27:04.881298 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:27:04.881310 | orchestrator |
2025-06-05 19:27:04.882929 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-05 19:27:04.883439 | orchestrator | Thursday 05 June 2025 19:27:04 +0000 (0:00:01.983) 0:00:44.908 *********
2025-06-05 19:27:05.041445 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:27:05.124777 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:27:05.204373 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:27:05.285582 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:27:05.379496 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:27:05.497935 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:27:05.498489 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:27:05.499571 | orchestrator |
2025-06-05 19:27:05.501264 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-05 19:27:05.501506 | orchestrator | Thursday 05 June 2025 19:27:05 +0000 (0:00:00.626) 0:00:45.534 *********
2025-06-05 19:27:05.659361 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:27:05.920303 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:27:06.000673 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:27:06.083188 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:27:06.165750 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:27:06.202142 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:27:06.202920 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:27:06.203998 | orchestrator |
2025-06-05 19:27:06.204890 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:27:06.206214 | orchestrator | 2025-06-05 19:27:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:27:06.206257 | orchestrator | 2025-06-05 19:27:06 | INFO  | Please wait and do not abort execution.
2025-06-05 19:27:06.207555 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:27:06.208516 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:27:06.209533 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:27:06.211242 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:27:06.212531 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:27:06.213147 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:27:06.214299 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:27:06.215809 | orchestrator |
2025-06-05 19:27:06.216277 | orchestrator |
2025-06-05 19:27:06.217205 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:27:06.217711 | orchestrator | Thursday 05 June 2025 19:27:06 +0000 (0:00:00.703) 0:00:46.237 *********
2025-06-05 19:27:06.218557 | orchestrator | ===============================================================================
2025-06-05 19:27:06.219219 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.51s
2025-06-05 19:27:06.219905 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.09s
2025-06-05 19:27:06.220477 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.58s
2025-06-05 19:27:06.220850 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.39s
2025-06-05 19:27:06.221260 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.20s
2025-06-05 19:27:06.221797 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.18s
2025-06-05 19:27:06.222490 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.03s
2025-06-05 19:27:06.222676 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.98s
2025-06-05 19:27:06.223107 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.76s
2025-06-05 19:27:06.223776 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2025-06-05 19:27:06.224122 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.45s
2025-06-05 19:27:06.224572 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s
2025-06-05 19:27:06.224905 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s
2025-06-05 19:27:06.225303 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.21s
2025-06-05 19:27:06.225697 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.15s
2025-06-05 19:27:06.226123 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s
2025-06-05 19:27:06.226969 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.09s
2025-06-05 19:27:06.228030 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.98s
2025-06-05 19:27:06.228735 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.97s
2025-06-05 19:27:06.229109 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s
2025-06-05 19:27:06.872303 | orchestrator | + osism apply wireguard
2025-06-05 19:27:08.556461 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:27:08.556520 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:27:08.556533 | orchestrator | Registering Redlock._release_script
2025-06-05 19:27:08.616053 | orchestrator | 2025-06-05 19:27:08 | INFO  | Task 954cd188-3b53-49dc-b25d-334840b00126 (wireguard) was prepared for execution.
2025-06-05 19:27:08.616141 | orchestrator | 2025-06-05 19:27:08 | INFO  | It takes a moment until task 954cd188-3b53-49dc-b25d-334840b00126 (wireguard) has been started and output is visible here.
2025-06-05 19:27:12.510815 | orchestrator | 2025-06-05 19:27:12.512221 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-05 19:27:12.513864 | orchestrator | 2025-06-05 19:27:12.513903 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-05 19:27:12.514087 | orchestrator | Thursday 05 June 2025 19:27:12 +0000 (0:00:00.195) 0:00:00.195 ********* 2025-06-05 19:27:13.738142 | orchestrator | ok: [testbed-manager] 2025-06-05 19:27:13.738719 | orchestrator | 2025-06-05 19:27:13.739885 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-05 19:27:13.741272 | orchestrator | Thursday 05 June 2025 19:27:13 +0000 (0:00:01.228) 0:00:01.423 ********* 2025-06-05 19:27:19.349514 | orchestrator | changed: [testbed-manager] 2025-06-05 19:27:19.350538 | orchestrator | 2025-06-05 19:27:19.352138 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-05 19:27:19.352750 | orchestrator | Thursday 05 June 2025 19:27:19 +0000 (0:00:05.608) 0:00:07.031 ********* 2025-06-05 19:27:19.884284 | orchestrator | changed: [testbed-manager] 2025-06-05 19:27:19.884629 | orchestrator | 2025-06-05 19:27:19.886360 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-05 19:27:19.886670 | orchestrator | Thursday 05 June 2025 19:27:19 +0000 (0:00:00.536) 0:00:07.568 ********* 2025-06-05 19:27:20.299532 | orchestrator | changed: [testbed-manager] 2025-06-05 19:27:20.299638 | orchestrator | 2025-06-05 19:27:20.300282 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-05 19:27:20.302410 | orchestrator | Thursday 05 June 2025 19:27:20 +0000 (0:00:00.416) 0:00:07.984 ********* 2025-06-05 19:27:20.864782 | orchestrator | ok: [testbed-manager] 2025-06-05 19:27:20.865161 | orchestrator | 2025-06-05 
19:27:20.865462 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-05 19:27:20.866071 | orchestrator | Thursday 05 June 2025 19:27:20 +0000 (0:00:00.563) 0:00:08.548 *********
2025-06-05 19:27:21.395747 | orchestrator | ok: [testbed-manager]
2025-06-05 19:27:21.396011 | orchestrator |
2025-06-05 19:27:21.396247 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-05 19:27:21.396893 | orchestrator | Thursday 05 June 2025 19:27:21 +0000 (0:00:00.533) 0:00:09.081 *********
2025-06-05 19:27:21.786924 | orchestrator | ok: [testbed-manager]
2025-06-05 19:27:21.787159 | orchestrator |
2025-06-05 19:27:21.788060 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-05 19:27:21.788881 | orchestrator | Thursday 05 June 2025 19:27:21 +0000 (0:00:00.390) 0:00:09.472 *********
2025-06-05 19:27:23.030759 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:23.031037 | orchestrator |
2025-06-05 19:27:23.033256 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-05 19:27:23.033340 | orchestrator | Thursday 05 June 2025 19:27:23 +0000 (0:00:01.242) 0:00:10.714 *********
2025-06-05 19:27:23.975370 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-05 19:27:23.976070 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:23.977619 | orchestrator |
2025-06-05 19:27:23.977648 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-05 19:27:23.977661 | orchestrator | Thursday 05 June 2025 19:27:23 +0000 (0:00:00.944) 0:00:11.659 *********
2025-06-05 19:27:25.633052 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:25.633153 | orchestrator |
2025-06-05 19:27:25.634429 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-05 19:27:25.636818 | orchestrator | Thursday 05 June 2025 19:27:25 +0000 (0:00:01.658) 0:00:13.317 *********
2025-06-05 19:27:26.610619 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:26.611582 | orchestrator |
2025-06-05 19:27:26.612525 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:27:26.613175 | orchestrator | 2025-06-05 19:27:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:27:26.613459 | orchestrator | 2025-06-05 19:27:26 | INFO  | Please wait and do not abort execution.
2025-06-05 19:27:26.614744 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:27:26.615680 | orchestrator |
2025-06-05 19:27:26.616301 | orchestrator |
2025-06-05 19:27:26.616943 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:27:26.617316 | orchestrator | Thursday 05 June 2025 19:27:26 +0000 (0:00:00.976) 0:00:14.294 *********
2025-06-05 19:27:26.618056 | orchestrator | ===============================================================================
2025-06-05 19:27:26.618746 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.61s
2025-06-05 19:27:26.619515 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.66s
2025-06-05 19:27:26.619844 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.24s
2025-06-05 19:27:26.620951 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.23s
2025-06-05 19:27:26.621024 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.98s
2025-06-05 19:27:26.621761 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.94s
2025-06-05 19:27:26.622382 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.56s
2025-06-05 19:27:26.622936 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-06-05 19:27:26.623471 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s
2025-06-05 19:27:26.623818 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2025-06-05 19:27:26.624353 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.39s
2025-06-05 19:27:27.250683 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-05 19:27:27.293164 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-05 19:27:27.293253 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-05 19:27:27.384798 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 163 0 --:--:-- --:--:-- --:--:-- 164
2025-06-05 19:27:27.398998 | orchestrator | + osism apply --environment custom workarounds
2025-06-05 19:27:29.094556 | orchestrator | 2025-06-05 19:27:29 | INFO  | Trying to run play workarounds in environment custom
2025-06-05 19:27:29.099210 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:27:29.099262 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:27:29.099275 | orchestrator | Registering Redlock._release_script
2025-06-05 19:27:29.162199 | orchestrator | 2025-06-05 19:27:29 | INFO  | Task 6a433271-4367-49d7-88b4-4764bf5b6983 (workarounds) was prepared for execution.
2025-06-05 19:27:29.162302 | orchestrator | 2025-06-05 19:27:29 | INFO  | It takes a moment until task 6a433271-4367-49d7-88b4-4764bf5b6983 (workarounds) has been started and output is visible here.
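The wireguard play above generates the server key pair, renders wg0.conf and the client configs, and enables wg-quick@wg0. A minimal sketch of tasks like those the role runs (file paths, template name, and handler wiring are assumptions, not taken from osism.services.wireguard itself):

```yaml
# Sketch only; paths and names are illustrative assumptions.
- name: Create public and private key - server
  ansible.builtin.shell: |
    umask 077
    wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
  args:
    creates: /etc/wireguard/server.key

- name: Copy wg0.conf configuration file
  ansible.builtin.template:
    src: wg0.conf.j2
    dest: /etc/wireguard/wg0.conf
    mode: "0600"
  notify: Restart wg0 service

- name: Manage wg-quick@wg0.service service
  ansible.builtin.systemd:
    name: wg-quick@wg0.service
    enabled: true
    state: started
```

The `creates` guard makes key generation idempotent, which matches the ok/changed pattern visible in the log.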
2025-06-05 19:27:33.064063 | orchestrator |
2025-06-05 19:27:33.064248 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:27:33.064825 | orchestrator |
2025-06-05 19:27:33.065375 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-05 19:27:33.066237 | orchestrator | Thursday 05 June 2025 19:27:33 +0000 (0:00:00.143) 0:00:00.143 *********
2025-06-05 19:27:33.230763 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-05 19:27:33.313394 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-05 19:27:33.412058 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-05 19:27:33.496136 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-05 19:27:33.689067 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-05 19:27:33.855214 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-05 19:27:33.855885 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-05 19:27:33.856588 | orchestrator |
2025-06-05 19:27:33.858721 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-05 19:27:33.859351 | orchestrator |
2025-06-05 19:27:33.859739 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-05 19:27:33.860290 | orchestrator | Thursday 05 June 2025 19:27:33 +0000 (0:00:00.794) 0:00:00.937 *********
2025-06-05 19:27:36.141086 | orchestrator | ok: [testbed-manager]
2025-06-05 19:27:36.141195 | orchestrator |
2025-06-05 19:27:36.141211 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-05 19:27:36.141224 | orchestrator |
2025-06-05 19:27:36.142062 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-05 19:27:36.145220 | orchestrator | Thursday 05 June 2025 19:27:36 +0000 (0:00:02.278) 0:00:03.215 *********
2025-06-05 19:27:38.052745 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:27:38.053260 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:27:38.057528 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:27:38.058322 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:27:38.058912 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:27:38.059561 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:27:38.062137 | orchestrator |
2025-06-05 19:27:38.062560 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-05 19:27:38.064871 | orchestrator |
2025-06-05 19:27:38.065273 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-05 19:27:38.068866 | orchestrator | Thursday 05 June 2025 19:27:38 +0000 (0:00:01.915) 0:00:05.130 *********
2025-06-05 19:27:39.595487 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-05 19:27:39.595572 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-05 19:27:39.597001 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-05 19:27:39.597020 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-05 19:27:39.597028 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-05 19:27:39.597035 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-05 19:27:39.597643 | orchestrator |
2025-06-05 19:27:39.598203 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-05 19:27:39.599959 | orchestrator | Thursday 05 June 2025 19:27:39 +0000 (0:00:01.530) 0:00:06.661 *********
2025-06-05 19:27:43.278739 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:27:43.279061 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:27:43.281518 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:27:43.281551 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:27:43.281824 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:27:43.282781 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:27:43.284313 | orchestrator |
2025-06-05 19:27:43.284337 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-05 19:27:43.286475 | orchestrator | Thursday 05 June 2025 19:27:43 +0000 (0:00:03.695) 0:00:10.356 *********
2025-06-05 19:27:43.470849 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:27:43.580794 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:27:43.661829 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:27:43.758511 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:27:44.102722 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:27:44.103750 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:27:44.105423 | orchestrator |
2025-06-05 19:27:44.106518 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-05 19:27:44.108093 | orchestrator |
2025-06-05 19:27:44.109897 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-05 19:27:44.110808 | orchestrator | Thursday 05 June 2025 19:27:44 +0000 (0:00:00.825) 0:00:11.182 *********
2025-06-05 19:27:45.863166 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:45.863291 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:27:45.864343 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:27:45.865053 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:27:45.870365 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:27:45.870405 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:27:45.870638 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:27:45.871427 | orchestrator |
2025-06-05 19:27:45.871734 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-05 19:27:45.872402 | orchestrator | Thursday 05 June 2025 19:27:45 +0000 (0:00:01.757) 0:00:12.939 *********
2025-06-05 19:27:47.658517 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:47.659103 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:27:47.660305 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:27:47.662006 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:27:47.662107 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:27:47.662798 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:27:47.664044 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:27:47.665034 | orchestrator |
2025-06-05 19:27:47.666124 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-05 19:27:47.666873 | orchestrator | Thursday 05 June 2025 19:27:47 +0000 (0:00:01.790) 0:00:14.730 *********
2025-06-05 19:27:49.345024 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:27:49.349324 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:27:49.349389 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:27:49.349403 | orchestrator | ok: [testbed-manager]
2025-06-05 19:27:49.350580 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:27:49.352011 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:27:49.353025 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:27:49.353684 | orchestrator |
2025-06-05 19:27:49.354793 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-05 19:27:49.356190 | orchestrator | Thursday 05 June 2025 19:27:49 +0000 (0:00:01.694) 0:00:16.424 *********
2025-06-05 19:27:51.141289 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:27:51.143340 | orchestrator | changed: [testbed-manager]
2025-06-05 19:27:51.144225 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:27:51.145133 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:27:51.146192 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:27:51.146988 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:27:51.147892 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:27:51.148523 | orchestrator |
2025-06-05 19:27:51.149451 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-06-05 19:27:51.150052 | orchestrator | Thursday 05 June 2025 19:27:51 +0000 (0:00:01.792) 0:00:18.217 *********
2025-06-05 19:27:51.338202 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:27:51.443814 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:27:51.637256 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:27:51.798827 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:27:51.925757 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:27:52.065287 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:27:52.066593 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:27:52.067612 | orchestrator |
2025-06-05 19:27:52.068744 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-06-05 19:27:52.069808 | orchestrator |
2025-06-05 19:27:52.070521 | orchestrator | TASK [Install python3-docker] **************************************************
2025-06-05 19:27:52.071481 | orchestrator | Thursday 05 June 2025 19:27:52 +0000 (0:00:00.929) 0:00:19.146 *********
2025-06-05 19:27:55.225515 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:27:55.226101 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:27:55.226783 | orchestrator | ok: [testbed-manager]
2025-06-05 19:27:55.228625 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:27:55.230179 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:27:55.231016 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:27:55.231706 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:27:55.232946 | orchestrator |
2025-06-05 19:27:55.232975 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:27:55.233185 | orchestrator | 2025-06-05 19:27:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:27:55.233516 | orchestrator | 2025-06-05 19:27:55 | INFO  | Please wait and do not abort execution.
2025-06-05 19:27:55.234396 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-05 19:27:55.234663 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:27:55.235787 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:27:55.235829 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:27:55.236487 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:27:55.237224 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:27:55.237493 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:27:55.238244 | orchestrator |
2025-06-05 19:27:55.238561 | orchestrator |
2025-06-05 19:27:55.239109 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:27:55.239935 | orchestrator | Thursday 05 June 2025 19:27:55 +0000 (0:00:03.153) 0:00:22.300 *********
2025-06-05 19:27:55.240197 | orchestrator | ===============================================================================
2025-06-05 19:27:55.240513 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.70s
2025-06-05 19:27:55.241155 | orchestrator | Install python3-docker -------------------------------------------------- 3.15s
2025-06-05 19:27:55.241454 | orchestrator | Apply netplan configuration --------------------------------------------- 2.28s
2025-06-05 19:27:55.242535 | orchestrator | Apply netplan configuration --------------------------------------------- 1.92s
2025-06-05 19:27:55.243024 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.79s
2025-06-05 19:27:55.243471 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.79s
2025-06-05 19:27:55.243824 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.76s
2025-06-05 19:27:55.244198 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.69s
2025-06-05 19:27:55.244521 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s
2025-06-05 19:27:55.244868 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.93s
2025-06-05 19:27:55.245745 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.83s
2025-06-05 19:27:55.245883 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s
2025-06-05 19:27:55.848100 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-06-05 19:27:57.557805 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:27:57.557995 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:27:57.558096 | orchestrator | Registering Redlock._release_script
2025-06-05 19:27:57.634157 | orchestrator | 2025-06-05 19:27:57 | INFO  | Task c8ef30d7-b661-4c50-b0d6-b649437dc7e3 (reboot) was prepared for execution.
2025-06-05 19:27:57.634255 | orchestrator | 2025-06-05 19:27:57 | INFO  | It takes a moment until task c8ef30d7-b661-4c50-b0d6-b649437dc7e3 (reboot) has been started and output is visible here.
2025-06-05 19:28:01.755112 | orchestrator |
2025-06-05 19:28:01.755341 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-05 19:28:01.756294 | orchestrator |
2025-06-05 19:28:01.759487 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-05 19:28:01.760335 | orchestrator | Thursday 05 June 2025 19:28:01 +0000 (0:00:00.207) 0:00:00.207 *********
2025-06-05 19:28:01.861608 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:28:01.862355 | orchestrator |
2025-06-05 19:28:01.863490 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-05 19:28:01.864335 | orchestrator | Thursday 05 June 2025 19:28:01 +0000 (0:00:00.109) 0:00:00.317 *********
2025-06-05 19:28:02.800249 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:28:02.800497 | orchestrator |
2025-06-05 19:28:02.801991 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-05 19:28:02.803620 | orchestrator | Thursday 05 June 2025 19:28:02 +0000 (0:00:00.937) 0:00:01.255 *********
2025-06-05 19:28:02.920684 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:28:02.921097 | orchestrator |
2025-06-05 19:28:02.923486 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-05 19:28:02.923514 | orchestrator |
2025-06-05 19:28:02.924211 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-05 19:28:02.925113 | orchestrator | Thursday 05 June 2025 19:28:02 +0000 (0:00:00.119) 0:00:01.374 *********
2025-06-05 19:28:03.034232 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:28:03.034523 | orchestrator |
2025-06-05 19:28:03.035231 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-05 19:28:03.035590 | orchestrator | Thursday 05 June 2025 19:28:03 +0000 (0:00:00.111) 0:00:01.486 *********
2025-06-05 19:28:03.687600 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:28:03.688468 | orchestrator |
2025-06-05 19:28:03.689343 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-05 19:28:03.690110 | orchestrator | Thursday 05 June 2025 19:28:03 +0000 (0:00:00.657) 0:00:02.143 *********
2025-06-05 19:28:03.799654 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:28:03.799878 | orchestrator |
2025-06-05 19:28:03.801341 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-05 19:28:03.802879 | orchestrator |
2025-06-05 19:28:03.803863 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-05 19:28:03.806126 | orchestrator | Thursday 05 June 2025 19:28:03 +0000 (0:00:00.109) 0:00:02.253 *********
2025-06-05 19:28:04.011675 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:28:04.012412 | orchestrator |
2025-06-05 19:28:04.013230 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-05 19:28:04.015364 | orchestrator | Thursday 05 June 2025 19:28:04 +0000 (0:00:00.213) 0:00:02.466 *********
2025-06-05 19:28:04.683520 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:28:04.684008 | orchestrator |
2025-06-05 19:28:04.684667 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-05 19:28:04.685057 | orchestrator | Thursday 05 June 2025 19:28:04 +0000 (0:00:00.672) 0:00:03.138 *********
2025-06-05 19:28:04.819393 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:28:04.820035 | orchestrator |
2025-06-05 19:28:04.821017 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-05 19:28:04.821622 | orchestrator |
2025-06-05 19:28:04.822859 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-05 19:28:04.824232 | orchestrator | Thursday 05 June 2025 19:28:04 +0000 (0:00:00.134) 0:00:03.273 *********
2025-06-05 19:28:04.918474 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:28:04.919106 | orchestrator |
2025-06-05 19:28:04.919787 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-05 19:28:04.920236 | orchestrator | Thursday 05 June 2025 19:28:04 +0000 (0:00:00.101) 0:00:03.374 *********
2025-06-05 19:28:05.577420 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:28:05.577637 | orchestrator |
2025-06-05 19:28:05.578137 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-05 19:28:05.579081 | orchestrator | Thursday 05 June 2025 19:28:05 +0000 (0:00:00.657) 0:00:04.032 *********
2025-06-05 19:28:05.686223 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:28:05.687759 | orchestrator |
2025-06-05 19:28:05.688791 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-05 19:28:05.689494 | orchestrator |
2025-06-05 19:28:05.691203 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-05 19:28:05.692016 | orchestrator | Thursday 05 June 2025 19:28:05 +0000 (0:00:00.105) 0:00:04.138 *********
2025-06-05 19:28:05.794301 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:28:05.794811 | orchestrator |
2025-06-05 19:28:05.796346 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-05 19:28:05.796372 | orchestrator | Thursday 05 June 2025 19:28:05 +0000 (0:00:00.110) 0:00:04.248 *********
2025-06-05 19:28:06.471446 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:28:06.472382 | orchestrator |
2025-06-05 19:28:06.473052 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-05 19:28:06.473952 | orchestrator | Thursday 05 June 2025 19:28:06 +0000 (0:00:00.677) 0:00:04.926 *********
2025-06-05 19:28:06.591035 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:28:06.592417 | orchestrator |
2025-06-05 19:28:06.592852 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-06-05 19:28:06.594394 | orchestrator |
2025-06-05 19:28:06.594427 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-06-05 19:28:06.594941 | orchestrator | Thursday 05 June 2025 19:28:06 +0000 (0:00:00.120) 0:00:05.046 *********
2025-06-05 19:28:06.697683 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:28:06.698717 | orchestrator |
2025-06-05 19:28:06.699304 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-06-05 19:28:06.701213 | orchestrator | Thursday 05 June 2025 19:28:06 +0000 (0:00:00.107) 0:00:05.153 *********
2025-06-05 19:28:07.400838 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:28:07.402402 | orchestrator |
2025-06-05 19:28:07.402436 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-06-05 19:28:07.403092 | orchestrator | Thursday 05 June 2025 19:28:07 +0000 (0:00:00.701) 0:00:05.855 *********
2025-06-05 19:28:07.440525 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:28:07.440605 | orchestrator |
2025-06-05 19:28:07.441215 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:28:07.442448 | orchestrator | 2025-06-05 19:28:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:28:07.442478 | orchestrator | 2025-06-05 19:28:07 | INFO  | Please wait and do not abort execution.
2025-06-05 19:28:07.443731 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:28:07.445386 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:28:07.446060 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:28:07.447327 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:28:07.447958 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:28:07.448661 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:28:07.449036 | orchestrator |
2025-06-05 19:28:07.449596 | orchestrator |
2025-06-05 19:28:07.450133 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:28:07.450853 | orchestrator | Thursday 05 June 2025 19:28:07 +0000 (0:00:00.040) 0:00:05.895 *********
2025-06-05 19:28:07.451305 | orchestrator | ===============================================================================
2025-06-05 19:28:07.451997 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.30s
2025-06-05 19:28:07.452400 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2025-06-05 19:28:07.453437 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.63s
2025-06-05 19:28:08.091059 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-06-05 19:28:09.808941 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:28:09.809865 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:28:09.809927 | orchestrator | Registering Redlock._release_script
2025-06-05 19:28:09.869825 | orchestrator | 2025-06-05 19:28:09 | INFO  | Task f287afd2-d253-4c66-acbf-f79507a396b4 (wait-for-connection) was prepared for execution.
2025-06-05 19:28:09.869929 | orchestrator | 2025-06-05 19:28:09 | INFO  | It takes a moment until task f287afd2-d253-4c66-acbf-f79507a396b4 (wait-for-connection) has been started and output is visible here.
2025-06-05 19:28:13.894831 | orchestrator |
2025-06-05 19:28:13.895047 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-06-05 19:28:13.895780 | orchestrator |
2025-06-05 19:28:13.899531 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-06-05 19:28:13.900080 | orchestrator | Thursday 05 June 2025 19:28:13 +0000 (0:00:00.235) 0:00:00.235 *********
2025-06-05 19:28:26.523683 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:28:26.523827 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:28:26.523842 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:28:26.523854 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:28:26.523865 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:28:26.523876 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:28:26.524075 | orchestrator |
2025-06-05 19:28:26.524956 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:28:26.525575 | orchestrator | 2025-06-05 19:28:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:28:26.525807 | orchestrator | 2025-06-05 19:28:26 | INFO  | Please wait and do not abort execution.
2025-06-05 19:28:26.528265 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:28:26.528315 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:28:26.528327 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:28:26.529252 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:28:26.529790 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:28:26.530232 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:28:26.530738 | orchestrator |
2025-06-05 19:28:26.531236 | orchestrator |
2025-06-05 19:28:26.531669 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:28:26.532134 | orchestrator | Thursday 05 June 2025 19:28:26 +0000 (0:00:12.626) 0:00:12.861 *********
2025-06-05 19:28:26.533072 | orchestrator | ===============================================================================
2025-06-05 19:28:26.533929 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.63s
2025-06-05 19:28:27.174230 | orchestrator | + osism apply hddtemp
2025-06-05 19:28:28.915970 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:28:28.916072 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:28:28.916088 | orchestrator | Registering Redlock._release_script
2025-06-05 19:28:28.973733 | orchestrator | 2025-06-05 19:28:28 | INFO  | Task 83aa29f8-b5fd-4dfd-91d1-b334ec982ff9 (hddtemp) was prepared for execution.
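The reboot and wait-for-connection plays above follow a common Ansible pattern: trigger the reboot asynchronously so the task returns before SSH drops, then poll in a separate play until the node is back. A rough sketch of that pattern (the delay and timeout values are assumptions, not the ones these playbooks use):

```yaml
# Sketch of the async-reboot-then-poll pattern seen in the log.
- name: Reboot system - do not wait for the reboot to complete
  ansible.builtin.shell: sleep 2 && systemctl reboot
  async: 1          # fire and forget; the task returns immediately
  poll: 0
  changed_when: true

- name: Wait until remote system is reachable
  ansible.builtin.wait_for_connection:
    delay: 10       # give the node time to actually go down first
    timeout: 600    # assumed upper bound on boot time
```

Without `async`/`poll: 0`, the reboot would tear down the SSH connection mid-task and the play would fail instead of reconnecting cleanly.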
2025-06-05 19:28:28.973827 | orchestrator | 2025-06-05 19:28:28 | INFO  | It takes a moment until task 83aa29f8-b5fd-4dfd-91d1-b334ec982ff9 (hddtemp) has been started and output is visible here.
2025-06-05 19:28:33.005691 | orchestrator |
2025-06-05 19:28:33.007040 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-06-05 19:28:33.008683 | orchestrator |
2025-06-05 19:28:33.009442 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-06-05 19:28:33.009977 | orchestrator | Thursday 05 June 2025 19:28:32 +0000 (0:00:00.248) 0:00:00.248 *********
2025-06-05 19:28:33.125800 | orchestrator | ok: [testbed-manager]
2025-06-05 19:28:33.179788 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:28:33.238114 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:28:33.294530 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:28:33.426628 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:28:33.524577 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:28:33.524760 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:28:33.525734 | orchestrator |
2025-06-05 19:28:33.526089 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-06-05 19:28:33.527282 | orchestrator | Thursday 05 June 2025 19:28:33 +0000 (0:00:00.515) 0:00:00.763 *********
2025-06-05 19:28:34.403633 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:28:34.403995 | orchestrator |
2025-06-05 19:28:34.404027 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-06-05 19:28:34.406854 | orchestrator | Thursday 05 June 2025 19:28:34 +0000 (0:00:00.883) 0:00:01.647 *********
2025-06-05 19:28:36.340826 | orchestrator | ok: [testbed-manager]
2025-06-05 19:28:36.342140 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:28:36.342315 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:28:36.346121 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:28:36.346200 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:28:36.346215 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:28:36.346226 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:28:36.346401 | orchestrator |
2025-06-05 19:28:36.347218 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-06-05 19:28:36.347784 | orchestrator | Thursday 05 June 2025 19:28:36 +0000 (0:00:01.937) 0:00:03.584 *********
2025-06-05 19:28:36.866238 | orchestrator | changed: [testbed-manager]
2025-06-05 19:28:36.943847 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:28:37.391160 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:28:37.391486 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:28:37.392563 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:28:37.395372 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:28:37.396739 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:28:37.396762 | orchestrator |
2025-06-05 19:28:37.396775 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-06-05 19:28:37.397424 | orchestrator | Thursday 05 June 2025 19:28:37 +0000 (0:00:01.048) 0:00:04.633 *********
2025-06-05 19:28:39.189040 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:28:39.189796 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:28:39.190604 | orchestrator | ok: [testbed-manager]
2025-06-05 19:28:39.191961 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:28:39.196044 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:28:39.196838 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:28:39.197469 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:28:39.198181 | orchestrator |
2025-06-05 19:28:39.198810 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-06-05 19:28:39.199504 | orchestrator | Thursday 05 June 2025 19:28:39 +0000 (0:00:01.800) 0:00:06.433 *********
2025-06-05 19:28:39.664620 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:28:39.747063 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:28:39.827669 | orchestrator | changed: [testbed-manager]
2025-06-05 19:28:39.917040 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:28:40.032273 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:28:40.033511 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:28:40.033938 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:28:40.034750 | orchestrator |
2025-06-05 19:28:40.035551 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-06-05 19:28:40.036383 | orchestrator | Thursday 05 June 2025 19:28:40 +0000 (0:00:00.839) 0:00:07.273 *********
2025-06-05 19:28:52.474347 | orchestrator | changed: [testbed-manager]
2025-06-05 19:28:52.474469 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:28:52.474485 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:28:52.474497 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:28:52.474733 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:28:52.474758 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:28:52.475287 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:28:52.475783 | orchestrator |
2025-06-05 19:28:52.476413 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-06-05 19:28:52.476975 | orchestrator | Thursday 05 June 2025 19:28:52 +0000 (0:00:12.438) 0:00:19.712 *********
2025-06-05 19:28:53.895150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:28:53.896949 | orchestrator |
2025-06-05 19:28:53.898168 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-06-05 19:28:53.899369 | orchestrator | Thursday 05 June 2025 19:28:53 +0000 (0:00:01.422) 0:00:21.134 *********
2025-06-05 19:28:55.757399 | orchestrator | changed: [testbed-manager]
2025-06-05 19:28:55.758282 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:28:55.759281 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:28:55.760045 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:28:55.761915 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:28:55.762782 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:28:55.763547 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:28:55.764323 | orchestrator |
2025-06-05 19:28:55.766212 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:28:55.766265 | orchestrator | 2025-06-05 19:28:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:28:55.766280 | orchestrator | 2025-06-05 19:28:55 | INFO  | Please wait and do not abort execution.
2025-06-05 19:28:55.766697 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:28:55.767470 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:28:55.768464 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:28:55.769032 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:28:55.769943 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:28:55.770382 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:28:55.771085 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:28:55.771771 | orchestrator | 2025-06-05 19:28:55.772544 | orchestrator | 2025-06-05 19:28:55.773207 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:28:55.773922 | orchestrator | Thursday 05 June 2025 19:28:55 +0000 (0:00:01.866) 0:00:23.001 ********* 2025-06-05 19:28:55.774730 | orchestrator | =============================================================================== 2025-06-05 19:28:55.775397 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.44s 2025-06-05 19:28:55.776059 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.94s 2025-06-05 19:28:55.778276 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.87s 2025-06-05 19:28:55.778304 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.80s 2025-06-05 19:28:55.778316 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 
2025-06-05 19:28:55.778762 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.05s 2025-06-05 19:28:55.779299 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 0.88s 2025-06-05 19:28:55.779995 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.84s 2025-06-05 19:28:55.780771 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.52s 2025-06-05 19:28:56.456074 | orchestrator | ++ semver 9.1.0 7.1.1 2025-06-05 19:28:56.498574 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-05 19:28:56.498659 | orchestrator | + sudo systemctl restart manager.service 2025-06-05 19:29:09.978469 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-05 19:29:09.978573 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-05 19:29:09.978587 | orchestrator | + local max_attempts=60 2025-06-05 19:29:09.978600 | orchestrator | + local name=ceph-ansible 2025-06-05 19:29:09.978611 | orchestrator | + local attempt_num=1 2025-06-05 19:29:09.978684 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:10.015541 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:10.015587 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:10.015599 | orchestrator | + sleep 5 2025-06-05 19:29:15.018377 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:15.054814 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:15.054943 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:15.054959 | orchestrator | + sleep 5 2025-06-05 19:29:20.057726 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:20.093962 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:20.094072 | orchestrator | + (( 
attempt_num++ == max_attempts )) 2025-06-05 19:29:20.094086 | orchestrator | + sleep 5 2025-06-05 19:29:25.099692 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:25.140610 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:25.140700 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:25.140714 | orchestrator | + sleep 5 2025-06-05 19:29:30.146315 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:30.187194 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:30.187278 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:30.187291 | orchestrator | + sleep 5 2025-06-05 19:29:35.192130 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:35.228812 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:35.228934 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:35.228946 | orchestrator | + sleep 5 2025-06-05 19:29:40.232994 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:40.272150 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:40.272231 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:40.272241 | orchestrator | + sleep 5 2025-06-05 19:29:45.277107 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:45.323117 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:45.323214 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:45.323229 | orchestrator | + sleep 5 2025-06-05 19:29:50.324647 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:50.351726 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:50.351803 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-06-05 19:29:50.351842 | orchestrator | + sleep 5 2025-06-05 19:29:55.356450 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:29:55.392853 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-05 19:29:55.392902 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:29:55.392907 | orchestrator | + sleep 5 2025-06-05 19:30:00.397784 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:30:00.428531 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-05 19:30:00.428631 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:30:00.428647 | orchestrator | + sleep 5 2025-06-05 19:30:05.432940 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:30:05.468448 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-05 19:30:05.468531 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:30:05.468544 | orchestrator | + sleep 5 2025-06-05 19:30:10.474591 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:30:10.512943 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-05 19:30:10.513031 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-05 19:30:10.513047 | orchestrator | + sleep 5 2025-06-05 19:30:15.519255 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-05 19:30:15.555539 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:30:15.555624 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-05 19:30:15.555706 | orchestrator | + local max_attempts=60 2025-06-05 19:30:15.555722 | orchestrator | + local name=kolla-ansible 2025-06-05 19:30:15.555734 | orchestrator | + local attempt_num=1 2025-06-05 19:30:15.555844 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-05 
19:30:15.586382 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:30:15.586439 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-05 19:30:15.586451 | orchestrator | + local max_attempts=60 2025-06-05 19:30:15.586463 | orchestrator | + local name=osism-ansible 2025-06-05 19:30:15.586475 | orchestrator | + local attempt_num=1 2025-06-05 19:30:15.587384 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-05 19:30:15.615623 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-05 19:30:15.615662 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-05 19:30:15.615674 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-05 19:30:15.765035 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-05 19:30:15.918129 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-05 19:30:16.075521 | orchestrator | ARA in osism-ansible already disabled. 2025-06-05 19:30:16.224049 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-05 19:30:16.224901 | orchestrator | + osism apply gather-facts 2025-06-05 19:30:17.945307 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:30:17.945425 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:30:17.945441 | orchestrator | Registering Redlock._release_script 2025-06-05 19:30:18.007490 | orchestrator | 2025-06-05 19:30:18 | INFO  | Task 4bac2888-13c7-46cf-8783-93acecb009cc (gather-facts) was prepared for execution. 2025-06-05 19:30:18.007604 | orchestrator | 2025-06-05 19:30:18 | INFO  | It takes a moment until task 4bac2888-13c7-46cf-8783-93acecb009cc (gather-facts) has been started and output is visible here. 
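The `set -x` trace above shows a `wait_for_container_healthy` helper polling `docker inspect` every 5 seconds until a container reports `healthy`. This is a hypothetical reconstruction of that helper from the trace; the actual script in the testbed repository may differ, and `docker` is invoked from `PATH` here (the trace uses `/usr/bin/docker`) so the function can be exercised or stubbed without a Docker installation:

```shell
# Hypothetical reconstruction of the wait_for_container_healthy helper
# traced above; the real testbed script may differ in detail.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status every 5 seconds until it reports
    # "healthy"; give up after max_attempts checks.
    while [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" != "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above, `ceph-ansible` cycles through `unhealthy` and `starting` for roughly a minute before the check passes, while `kolla-ansible` and `osism-ansible` are already healthy on the first poll.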
2025-06-05 19:30:22.011234 | orchestrator | 2025-06-05 19:30:22.011974 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-05 19:30:22.012618 | orchestrator | 2025-06-05 19:30:22.012656 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-05 19:30:22.013465 | orchestrator | Thursday 05 June 2025 19:30:21 +0000 (0:00:00.218) 0:00:00.218 ********* 2025-06-05 19:30:27.501246 | orchestrator | ok: [testbed-manager] 2025-06-05 19:30:27.501359 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:30:27.503527 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:30:27.503571 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:30:27.504929 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:30:27.505898 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:30:27.507091 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:30:27.507865 | orchestrator | 2025-06-05 19:30:27.508894 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-05 19:30:27.509269 | orchestrator | 2025-06-05 19:30:27.510309 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-05 19:30:27.510783 | orchestrator | Thursday 05 June 2025 19:30:27 +0000 (0:00:05.493) 0:00:05.712 ********* 2025-06-05 19:30:27.669058 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:30:27.747657 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:30:27.822943 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:30:27.900189 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:30:27.974779 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:30:28.019693 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:30:28.021381 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:30:28.022666 | orchestrator | 2025-06-05 19:30:28.024087 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-05 19:30:28.024373 | orchestrator | 2025-06-05 19:30:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-05 19:30:28.024518 | orchestrator | 2025-06-05 19:30:28 | INFO  | Please wait and do not abort execution. 2025-06-05 19:30:28.026796 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.027653 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.028857 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.029622 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.030741 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.031164 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.031387 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:30:28.031725 | orchestrator | 2025-06-05 19:30:28.032299 | orchestrator | 2025-06-05 19:30:28.032679 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:30:28.033071 | orchestrator | Thursday 05 June 2025 19:30:28 +0000 (0:00:00.520) 0:00:06.233 ********* 2025-06-05 19:30:28.033544 | orchestrator | =============================================================================== 2025-06-05 19:30:28.033921 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.49s 2025-06-05 19:30:28.034249 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-06-05 19:30:28.634310 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-05 19:30:28.651146 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-05 19:30:28.662610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-05 19:30:28.676738 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-05 19:30:28.689108 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-05 19:30:28.701610 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-05 19:30:28.719466 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-05 19:30:28.736438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-05 19:30:28.754064 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-05 19:30:28.767615 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-05 19:30:28.780194 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-05 19:30:28.792749 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-05 19:30:28.814211 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-05 19:30:28.831615 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-05 19:30:28.850537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-05 19:30:28.871889 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-05 19:30:28.889535 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-05 19:30:28.908062 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-05 19:30:28.927894 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-05 19:30:28.945319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-05 19:30:28.964323 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-05 19:30:29.235003 | orchestrator | ok: Runtime: 0:19:50.444250 2025-06-05 19:30:29.341686 | 2025-06-05 19:30:29.341828 | TASK [Deploy services] 2025-06-05 19:30:29.874559 | orchestrator | skipping: Conditional result was False 2025-06-05 19:30:29.892323 | 2025-06-05 19:30:29.892480 | TASK [Deploy in a nutshell] 2025-06-05 19:30:30.622686 | orchestrator | + set -e 2025-06-05 19:30:30.622950 | orchestrator | 2025-06-05 19:30:30.622977 | orchestrator | # PULL IMAGES 2025-06-05 19:30:30.622992 | orchestrator | 2025-06-05 19:30:30.623011 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-05 19:30:30.623032 | orchestrator | ++ export INTERACTIVE=false 2025-06-05 19:30:30.623047 | orchestrator | ++ INTERACTIVE=false 2025-06-05 19:30:30.623106 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-05 19:30:30.623142 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-05 19:30:30.623158 | orchestrator | + 
source /opt/manager-vars.sh 2025-06-05 19:30:30.623170 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-05 19:30:30.623189 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-05 19:30:30.623201 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-05 19:30:30.623218 | orchestrator | ++ CEPH_VERSION=reef 2025-06-05 19:30:30.623230 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-05 19:30:30.623248 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-05 19:30:30.623260 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-05 19:30:30.623275 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-05 19:30:30.623286 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-05 19:30:30.623299 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-05 19:30:30.623310 | orchestrator | ++ export ARA=false 2025-06-05 19:30:30.623321 | orchestrator | ++ ARA=false 2025-06-05 19:30:30.623333 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-05 19:30:30.623344 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-05 19:30:30.623355 | orchestrator | ++ export TEMPEST=false 2025-06-05 19:30:30.623366 | orchestrator | ++ TEMPEST=false 2025-06-05 19:30:30.623377 | orchestrator | ++ export IS_ZUUL=true 2025-06-05 19:30:30.623389 | orchestrator | ++ IS_ZUUL=true 2025-06-05 19:30:30.623400 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172 2025-06-05 19:30:30.623411 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172 2025-06-05 19:30:30.623423 | orchestrator | ++ export EXTERNAL_API=false 2025-06-05 19:30:30.623434 | orchestrator | ++ EXTERNAL_API=false 2025-06-05 19:30:30.623445 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-05 19:30:30.623457 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-05 19:30:30.623468 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-05 19:30:30.623479 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-05 19:30:30.623490 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-05 19:30:30.623509 | orchestrator | 
++ CEPH_STACK=ceph-ansible 2025-06-05 19:30:30.623521 | orchestrator | + echo 2025-06-05 19:30:30.623532 | orchestrator | + echo '# PULL IMAGES' 2025-06-05 19:30:30.623544 | orchestrator | + echo 2025-06-05 19:30:30.623555 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-05 19:30:30.680190 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-05 19:30:30.680238 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-05 19:30:32.361862 | orchestrator | 2025-06-05 19:30:32 | INFO  | Trying to run play pull-images in environment custom 2025-06-05 19:30:32.366838 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:30:32.366915 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:30:32.366923 | orchestrator | Registering Redlock._release_script 2025-06-05 19:30:32.429556 | orchestrator | 2025-06-05 19:30:32 | INFO  | Task d8f6116e-4d22-4bad-a70d-894a11136686 (pull-images) was prepared for execution. 2025-06-05 19:30:32.429612 | orchestrator | 2025-06-05 19:30:32 | INFO  | It takes a moment until task d8f6116e-4d22-4bad-a70d-894a11136686 (pull-images) has been started and output is visible here. 
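The trace calls a `semver` helper (`semver 9.1.0 7.0.0`) whose printed result is then compared with `-ge 0`, i.e. it appears to emit -1, 0, or 1 depending on how the first version compares to the second. A plausible reimplementation, assuming GNU `sort -V` for version ordering (the helper's real implementation is not shown in the log):

```shell
# Hypothetical stand-in for the semver helper used in the trace above:
# prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
        return
    fi
    local lowest
    # sort -V orders version strings numerically per component.
    lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
    if [[ "$lowest" == "$2" ]]; then
        echo 1
    else
        echo -1
    fi
}
```

With `MANAGER_VERSION=9.1.0`, `semver 9.1.0 7.0.0` yields 1, so the `[[ 1 -ge 0 ]]` guard passes and the version-gated steps run.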
2025-06-05 19:30:36.252620 | orchestrator | 2025-06-05 19:30:36.253763 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-05 19:30:36.255032 | orchestrator | 2025-06-05 19:30:36.258869 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-05 19:30:36.258879 | orchestrator | Thursday 05 June 2025 19:30:36 +0000 (0:00:00.114) 0:00:00.114 ********* 2025-06-05 19:31:40.599709 | orchestrator | changed: [testbed-manager] 2025-06-05 19:31:40.599972 | orchestrator | 2025-06-05 19:31:40.599997 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-05 19:31:40.600011 | orchestrator | Thursday 05 June 2025 19:31:40 +0000 (0:01:04.352) 0:01:04.467 ********* 2025-06-05 19:32:33.023339 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-05 19:32:33.024463 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-05 19:32:33.024670 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-05 19:32:33.026590 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-05 19:32:33.027904 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-05 19:32:33.029030 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-05 19:32:33.029782 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-05 19:32:33.030630 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-05 19:32:33.030948 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-05 19:32:33.031483 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-05 19:32:33.032095 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-05 19:32:33.032535 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-05 19:32:33.033097 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-05 19:32:33.033854 
| orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-05 19:32:33.034682 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-05 19:32:33.035488 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-05 19:32:33.036222 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-05 19:32:33.036931 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-05 19:32:33.037575 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-05 19:32:33.038143 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-05 19:32:33.038730 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-05 19:32:33.039453 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-05 19:32:33.040201 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-05 19:32:33.040833 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-05 19:32:33.041455 | orchestrator | 2025-06-05 19:32:33.042138 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:32:33.042530 | orchestrator | 2025-06-05 19:32:33 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-05 19:32:33.042914 | orchestrator | 2025-06-05 19:32:33 | INFO  | Please wait and do not abort execution. 
2025-06-05 19:32:33.043690 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:32:33.044370 | orchestrator | 2025-06-05 19:32:33.044954 | orchestrator | 2025-06-05 19:32:33.045393 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:32:33.045928 | orchestrator | Thursday 05 June 2025 19:32:33 +0000 (0:00:52.424) 0:01:56.891 ********* 2025-06-05 19:32:33.046557 | orchestrator | =============================================================================== 2025-06-05 19:32:33.047522 | orchestrator | Pull keystone image ---------------------------------------------------- 64.35s 2025-06-05 19:32:33.048226 | orchestrator | Pull other images ------------------------------------------------------ 52.42s 2025-06-05 19:32:34.933280 | orchestrator | 2025-06-05 19:32:34 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-05 19:32:34.937191 | orchestrator | Registering Redlock._acquired_script 2025-06-05 19:32:34.937220 | orchestrator | Registering Redlock._extend_script 2025-06-05 19:32:34.937226 | orchestrator | Registering Redlock._release_script 2025-06-05 19:32:34.988137 | orchestrator | 2025-06-05 19:32:34 | INFO  | Task c76eb8d9-d472-417d-9b2a-2a39d05c692e (wipe-partitions) was prepared for execution. 2025-06-05 19:32:34.988223 | orchestrator | 2025-06-05 19:32:34 | INFO  | It takes a moment until task c76eb8d9-d472-417d-9b2a-2a39d05c692e (wipe-partitions) has been started and output is visible here. 
2025-06-05 19:32:38.645432 | orchestrator | 2025-06-05 19:32:38.645574 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-05 19:32:38.646435 | orchestrator | 2025-06-05 19:32:38.648331 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-05 19:32:38.648913 | orchestrator | Thursday 05 June 2025 19:32:38 +0000 (0:00:00.129) 0:00:00.129 ********* 2025-06-05 19:32:39.226885 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:32:39.227041 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:32:39.227498 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:32:39.228267 | orchestrator | 2025-06-05 19:32:39.231599 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-05 19:32:39.231659 | orchestrator | Thursday 05 June 2025 19:32:39 +0000 (0:00:00.583) 0:00:00.713 ********* 2025-06-05 19:32:39.372592 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:32:39.458902 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:32:39.458997 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:32:39.459010 | orchestrator | 2025-06-05 19:32:39.459563 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-05 19:32:39.463209 | orchestrator | Thursday 05 June 2025 19:32:39 +0000 (0:00:00.228) 0:00:00.941 ********* 2025-06-05 19:32:40.212309 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:32:40.212723 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:32:40.214729 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:32:40.214756 | orchestrator | 2025-06-05 19:32:40.215174 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-05 19:32:40.215532 | orchestrator | Thursday 05 June 2025 19:32:40 +0000 (0:00:00.757) 0:00:01.699 ********* 2025-06-05 19:32:40.367784 | orchestrator | skipping: 
[testbed-node-3]
2025-06-05 19:32:40.503368 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:32:40.503732 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:32:40.504289 | orchestrator |
2025-06-05 19:32:40.504673 | orchestrator | TASK [Check device availability] ***********************************************
2025-06-05 19:32:40.506156 | orchestrator | Thursday 05 June 2025 19:32:40 +0000 (0:00:00.291) 0:00:01.990 *********
2025-06-05 19:32:41.690347 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-05 19:32:41.690776 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-05 19:32:41.693221 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-05 19:32:41.693250 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-05 19:32:41.693263 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-05 19:32:41.693275 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-05 19:32:41.694000 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-05 19:32:41.695262 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-05 19:32:41.699165 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-05 19:32:41.699966 | orchestrator |
2025-06-05 19:32:41.700395 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-06-05 19:32:41.701089 | orchestrator | Thursday 05 June 2025 19:32:41 +0000 (0:00:01.183) 0:00:03.174 *********
2025-06-05 19:32:43.133093 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-06-05 19:32:43.133340 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-06-05 19:32:43.133564 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-06-05 19:32:43.133588 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-06-05 19:32:43.133883 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-06-05 19:32:43.137273 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-06-05 19:32:43.139083 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-06-05 19:32:43.139113 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-06-05 19:32:43.139125 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-06-05 19:32:43.139136 | orchestrator |
2025-06-05 19:32:43.139328 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-06-05 19:32:43.139586 | orchestrator | Thursday 05 June 2025 19:32:43 +0000 (0:00:01.434) 0:00:04.609 *********
2025-06-05 19:32:45.405355 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-06-05 19:32:45.405466 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-06-05 19:32:45.405473 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-06-05 19:32:45.405526 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-06-05 19:32:45.408164 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-06-05 19:32:45.408326 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-06-05 19:32:45.408598 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-06-05 19:32:45.408985 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-06-05 19:32:45.409293 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-06-05 19:32:45.409583 | orchestrator |
2025-06-05 19:32:45.409960 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-06-05 19:32:45.410239 | orchestrator | Thursday 05 June 2025 19:32:45 +0000 (0:00:02.281) 0:00:06.890 *********
2025-06-05 19:32:45.973987 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:32:45.974088 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:32:45.974492 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:32:45.975982 | orchestrator |
2025-06-05 19:32:45.976286 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-06-05 19:32:45.976613 | orchestrator | Thursday 05 June 2025 19:32:45 +0000 (0:00:00.570) 0:00:07.461 *********
2025-06-05 19:32:46.529119 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:32:46.529512 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:32:46.530033 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:32:46.530652 | orchestrator |
2025-06-05 19:32:46.531150 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:32:46.531469 | orchestrator | 2025-06-05 19:32:46 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:32:46.532069 | orchestrator | 2025-06-05 19:32:46 | INFO  | Please wait and do not abort execution.
2025-06-05 19:32:46.532473 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:32:46.532935 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:32:46.535381 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:32:46.536152 | orchestrator |
2025-06-05 19:32:46.536492 | orchestrator |
2025-06-05 19:32:46.537662 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:32:46.537816 | orchestrator | Thursday 05 June 2025 19:32:46 +0000 (0:00:00.554) 0:00:08.016 *********
2025-06-05 19:32:46.538265 | orchestrator | ===============================================================================
2025-06-05 19:32:46.538598 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.28s
2025-06-05 19:32:46.539295 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.43s
2025-06-05 19:32:46.539394 | orchestrator | Check device availability ----------------------------------------------- 1.18s
2025-06-05 19:32:46.539674 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.76s
2025-06-05 19:32:46.540040 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2025-06-05 19:32:46.540503 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
2025-06-05 19:32:46.540714 | orchestrator | Request device events from the kernel ----------------------------------- 0.56s
2025-06-05 19:32:46.540963 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.29s
2025-06-05 19:32:46.541313 | orchestrator | Remove all rook related logical devices --------------------------------- 0.23s
2025-06-05 19:32:48.395336 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:32:48.395473 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:32:48.395481 | orchestrator | Registering Redlock._release_script
2025-06-05 19:32:48.444898 | orchestrator | 2025-06-05 19:32:48 | INFO  | Task 5ed65286-0034-4ea5-a605-dc5ccf6eabb0 (facts) was prepared for execution.
2025-06-05 19:32:48.445520 | orchestrator | 2025-06-05 19:32:48 | INFO  | It takes a moment until task 5ed65286-0034-4ea5-a605-dc5ccf6eabb0 (facts) has been started and output is visible here.
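The disk-preparation play above wipes filesystem signatures, zeroes the start of each OSD candidate disk, and re-triggers udev so the kernel re-reads the devices. A minimal dry-run sketch of the per-device command sequence implied by the task names (it only prints commands; the exact flags, e.g. `conv=fsync`, are assumptions and not taken from the log):

```python
def wipe_commands(device: str, zero_mib: int = 32) -> list[list[str]]:
    """Return the per-device command sequence suggested by the task names."""
    return [
        ["wipefs", "-a", device],                      # drop filesystem/partition signatures
        ["dd", "if=/dev/zero", f"of={device}",
         "bs=1M", f"count={zero_mib}", "conv=fsync"],  # overwrite first 32M with zeros
        ["udevadm", "control", "--reload-rules"],      # reload udev rules
        ["udevadm", "trigger"],                        # request device events from the kernel
    ]

# The log shows this applied to /dev/sdb..sdd on testbed-node-3..5.
for dev in ("/dev/sdb", "/dev/sdc", "/dev/sdd"):
    for cmd in wipe_commands(dev):
        print(" ".join(cmd))
```

Zeroing the first mebibytes in addition to `wipefs` clears any LVM/Ceph metadata that signature-based wiping might miss.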
2025-06-05 19:32:52.467490 | orchestrator |
2025-06-05 19:32:52.467576 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-05 19:32:52.467585 | orchestrator |
2025-06-05 19:32:52.469071 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-05 19:32:52.469083 | orchestrator | Thursday 05 June 2025 19:32:52 +0000 (0:00:00.201) 0:00:00.201 *********
2025-06-05 19:32:53.389590 | orchestrator | ok: [testbed-manager]
2025-06-05 19:32:53.395469 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:32:53.395488 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:32:53.396587 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:32:53.397918 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:32:53.399650 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:32:53.399659 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:32:53.400008 | orchestrator |
2025-06-05 19:32:53.400943 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-05 19:32:53.401294 | orchestrator | Thursday 05 June 2025 19:32:53 +0000 (0:00:00.920) 0:00:01.122 *********
2025-06-05 19:32:53.538629 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:32:53.603844 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:32:53.667351 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:32:53.722972 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:32:53.781165 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:32:54.322115 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:32:54.323491 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:32:54.326432 | orchestrator |
2025-06-05 19:32:54.326815 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-05 19:32:54.327950 | orchestrator |
2025-06-05 19:32:54.328264 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-05 19:32:54.333160 | orchestrator | Thursday 05 June 2025 19:32:54 +0000 (0:00:00.934) 0:00:02.057 *********
2025-06-05 19:33:00.074285 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:33:00.074395 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:33:00.075961 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:33:00.076366 | orchestrator | ok: [testbed-manager]
2025-06-05 19:33:00.078154 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:33:00.078406 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:33:00.078935 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:33:00.079318 | orchestrator |
2025-06-05 19:33:00.080540 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-05 19:33:00.083746 | orchestrator |
2025-06-05 19:33:00.083811 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-05 19:33:00.084414 | orchestrator | Thursday 05 June 2025 19:33:00 +0000 (0:00:05.748) 0:00:07.805 *********
2025-06-05 19:33:00.235308 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:33:00.317734 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:33:00.389922 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:33:00.473128 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:33:00.551204 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:00.591353 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:33:00.591496 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:00.591605 | orchestrator |
2025-06-05 19:33:00.594217 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:33:00.595031 | orchestrator | 2025-06-05 19:33:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:33:00.595065 | orchestrator | 2025-06-05 19:33:00 | INFO  | Please wait and do not abort execution.
2025-06-05 19:33:00.598568 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.598875 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.600001 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.605311 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.605876 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.606478 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.607021 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:33:00.607783 | orchestrator |
2025-06-05 19:33:00.608264 | orchestrator |
2025-06-05 19:33:00.608843 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:33:00.609416 | orchestrator | Thursday 05 June 2025 19:33:00 +0000 (0:00:00.522) 0:00:08.328 *********
2025-06-05 19:33:00.612027 | orchestrator | ===============================================================================
2025-06-05 19:33:00.612061 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.75s
2025-06-05 19:33:00.612073 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 0.93s
2025-06-05 19:33:00.612084 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.92s
2025-06-05 19:33:00.612095 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-06-05 19:33:03.191477 | orchestrator | 2025-06-05 19:33:03 | INFO  | Task 592b3434-9c8f-44a9-b45b-8dd9c544e507 (ceph-configure-lvm-volumes) was prepared for execution.
2025-06-05 19:33:03.191580 | orchestrator | 2025-06-05 19:33:03 | INFO  | It takes a moment until task 592b3434-9c8f-44a9-b45b-8dd9c544e507 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-06-05 19:33:07.458292 | orchestrator |
2025-06-05 19:33:07.459532 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-06-05 19:33:07.464103 | orchestrator |
2025-06-05 19:33:07.465623 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-05 19:33:07.467019 | orchestrator | Thursday 05 June 2025 19:33:07 +0000 (0:00:00.308) 0:00:00.308 *********
2025-06-05 19:33:07.694834 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 19:33:07.696771 | orchestrator |
2025-06-05 19:33:07.697834 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-05 19:33:07.699050 | orchestrator | Thursday 05 June 2025 19:33:07 +0000 (0:00:00.237) 0:00:00.545 *********
2025-06-05 19:33:07.885752 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:33:07.885836 | orchestrator |
2025-06-05 19:33:07.886419 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:07.886749 | orchestrator | Thursday 05 June 2025 19:33:07 +0000 (0:00:00.189) 0:00:00.735 *********
2025-06-05 19:33:08.223013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-05 19:33:08.224140 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-05 19:33:08.225234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-05 19:33:08.226544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-05 19:33:08.226764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-05 19:33:08.227511 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-05 19:33:08.230000 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-05 19:33:08.230440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-05 19:33:08.231002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-05 19:33:08.231300 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-05 19:33:08.231986 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-05 19:33:08.232797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-05 19:33:08.234398 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-05 19:33:08.234804 | orchestrator |
2025-06-05 19:33:08.235871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:08.235984 | orchestrator | Thursday 05 June 2025 19:33:08 +0000 (0:00:00.340) 0:00:01.075 *********
2025-06-05 19:33:08.595279 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:08.595419 | orchestrator |
2025-06-05 19:33:08.595438 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:08.595517 | orchestrator | Thursday 05 June 2025 19:33:08 +0000 (0:00:00.370) 0:00:01.445 *********
2025-06-05 19:33:08.765362 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:08.765830 | orchestrator |
2025-06-05 19:33:08.766758 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:08.767417 | orchestrator | Thursday 05 June 2025 19:33:08 +0000 (0:00:00.172) 0:00:01.618 *********
2025-06-05 19:33:08.959620 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:08.960146 | orchestrator |
2025-06-05 19:33:08.961036 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:08.961062 | orchestrator | Thursday 05 June 2025 19:33:08 +0000 (0:00:00.191) 0:00:01.809 *********
2025-06-05 19:33:09.121269 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:09.122463 | orchestrator |
2025-06-05 19:33:09.122832 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:09.123955 | orchestrator | Thursday 05 June 2025 19:33:09 +0000 (0:00:00.161) 0:00:01.970 *********
2025-06-05 19:33:09.304312 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:09.304622 | orchestrator |
2025-06-05 19:33:09.305706 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:09.307422 | orchestrator | Thursday 05 June 2025 19:33:09 +0000 (0:00:00.182) 0:00:02.153 *********
2025-06-05 19:33:09.455484 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:09.456311 | orchestrator |
2025-06-05 19:33:09.457506 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:09.458172 | orchestrator | Thursday 05 June 2025 19:33:09 +0000 (0:00:00.151) 0:00:02.305 *********
2025-06-05 19:33:09.625214 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:09.625462 | orchestrator |
2025-06-05 19:33:09.626103 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:09.629516 | orchestrator | Thursday 05 June 2025 19:33:09 +0000 (0:00:00.167) 0:00:02.473 *********
2025-06-05 19:33:09.786717 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:09.787401 | orchestrator |
2025-06-05 19:33:09.788005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:09.791426 | orchestrator | Thursday 05 June 2025 19:33:09 +0000 (0:00:00.164) 0:00:02.637 *********
2025-06-05 19:33:10.126561 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30)
2025-06-05 19:33:10.127201 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30)
2025-06-05 19:33:10.127718 | orchestrator |
2025-06-05 19:33:10.127938 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:10.128554 | orchestrator | Thursday 05 June 2025 19:33:10 +0000 (0:00:00.340) 0:00:02.978 *********
2025-06-05 19:33:10.502496 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312)
2025-06-05 19:33:10.503212 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312)
2025-06-05 19:33:10.503923 | orchestrator |
2025-06-05 19:33:10.504531 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:10.505861 | orchestrator | Thursday 05 June 2025 19:33:10 +0000 (0:00:00.373) 0:00:03.352 *********
2025-06-05 19:33:11.031791 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441)
2025-06-05 19:33:11.031998 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441)
2025-06-05 19:33:11.032483 | orchestrator |
2025-06-05 19:33:11.033116 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:11.033760 | orchestrator | Thursday 05 June 2025 19:33:11 +0000 (0:00:00.530) 0:00:03.882 *********
2025-06-05 19:33:11.535033 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766)
2025-06-05 19:33:11.537885 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766)
2025-06-05 19:33:11.537914 | orchestrator |
2025-06-05 19:33:11.537928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:11.537940 | orchestrator | Thursday 05 June 2025 19:33:11 +0000 (0:00:00.502) 0:00:04.385 *********
2025-06-05 19:33:12.184959 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-05 19:33:12.186442 | orchestrator |
2025-06-05 19:33:12.187327 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:12.190842 | orchestrator | Thursday 05 June 2025 19:33:12 +0000 (0:00:00.651) 0:00:05.036 *********
2025-06-05 19:33:12.533435 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-05 19:33:12.533524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-05 19:33:12.533540 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-05 19:33:12.534387 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-05 19:33:12.534412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-05 19:33:12.535155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-05 19:33:12.535857 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-05 19:33:12.536198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-05 19:33:12.536523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-05 19:33:12.537035 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-05 19:33:12.538838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-05 19:33:12.539004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-05 19:33:12.539451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-05 19:33:12.539997 | orchestrator |
2025-06-05 19:33:12.541022 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:12.541072 | orchestrator | Thursday 05 June 2025 19:33:12 +0000 (0:00:00.347) 0:00:05.384 *********
2025-06-05 19:33:12.703199 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:12.704934 | orchestrator |
2025-06-05 19:33:12.704963 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:12.706276 | orchestrator | Thursday 05 June 2025 19:33:12 +0000 (0:00:00.168) 0:00:05.552 *********
2025-06-05 19:33:12.886362 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:12.886486 | orchestrator |
2025-06-05 19:33:12.886584 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:12.887080 | orchestrator | Thursday 05 June 2025 19:33:12 +0000 (0:00:00.185) 0:00:05.738 *********
2025-06-05 19:33:13.049919 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:13.050242 | orchestrator |
2025-06-05 19:33:13.050639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:13.051403 | orchestrator | Thursday 05 June 2025 19:33:13 +0000 (0:00:00.163) 0:00:05.901 *********
2025-06-05 19:33:13.223590 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:13.223698 | orchestrator |
2025-06-05 19:33:13.224316 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:13.225296 | orchestrator | Thursday 05 June 2025 19:33:13 +0000 (0:00:00.171) 0:00:06.073 *********
2025-06-05 19:33:13.392867 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:13.393031 | orchestrator |
2025-06-05 19:33:13.393999 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:13.394863 | orchestrator | Thursday 05 June 2025 19:33:13 +0000 (0:00:00.170) 0:00:06.243 *********
2025-06-05 19:33:13.593603 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:13.594408 | orchestrator |
2025-06-05 19:33:13.594874 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:13.596103 | orchestrator | Thursday 05 June 2025 19:33:13 +0000 (0:00:00.201) 0:00:06.445 *********
2025-06-05 19:33:13.803211 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:13.803294 | orchestrator |
2025-06-05 19:33:13.803309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:13.803972 | orchestrator | Thursday 05 June 2025 19:33:13 +0000 (0:00:00.205) 0:00:06.650 *********
2025-06-05 19:33:13.980806 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:13.982700 | orchestrator |
2025-06-05 19:33:13.983296 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:13.984194 | orchestrator | Thursday 05 June 2025 19:33:13 +0000 (0:00:00.181) 0:00:06.831 *********
2025-06-05 19:33:15.016527 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-05 19:33:15.017291 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-05 19:33:15.017814 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-05 19:33:15.021450 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-05 19:33:15.021520 | orchestrator |
2025-06-05 19:33:15.021735 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:15.022252 | orchestrator | Thursday 05 June 2025 19:33:15 +0000 (0:00:01.036) 0:00:07.867 *********
2025-06-05 19:33:15.207274 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:15.209877 | orchestrator |
2025-06-05 19:33:15.210561 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:15.211337 | orchestrator | Thursday 05 June 2025 19:33:15 +0000 (0:00:00.190) 0:00:08.057 *********
2025-06-05 19:33:15.395710 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:15.395783 | orchestrator |
2025-06-05 19:33:15.397971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:15.398545 | orchestrator | Thursday 05 June 2025 19:33:15 +0000 (0:00:00.189) 0:00:08.246 *********
2025-06-05 19:33:15.614505 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:15.614728 | orchestrator |
2025-06-05 19:33:15.615170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:15.615569 | orchestrator | Thursday 05 June 2025 19:33:15 +0000 (0:00:00.216) 0:00:08.463 *********
2025-06-05 19:33:15.819027 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:15.819923 | orchestrator |
2025-06-05 19:33:15.821140 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-05 19:33:15.823335 | orchestrator | Thursday 05 June 2025 19:33:15 +0000 (0:00:00.205) 0:00:08.669 *********
2025-06-05 19:33:15.992719 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-06-05 19:33:15.993122 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-06-05 19:33:15.993551 | orchestrator |
2025-06-05 19:33:15.994107 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-05 19:33:15.994610 | orchestrator | Thursday 05 June 2025 19:33:15 +0000 (0:00:00.175) 0:00:08.844 *********
2025-06-05 19:33:16.125391 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:16.127129 | orchestrator |
2025-06-05 19:33:16.128377 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-05 19:33:16.129244 | orchestrator | Thursday 05 June 2025 19:33:16 +0000 (0:00:00.131) 0:00:08.976 *********
2025-06-05 19:33:16.255804 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:16.256006 | orchestrator |
2025-06-05 19:33:16.256601 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-05 19:33:16.257276 | orchestrator | Thursday 05 June 2025 19:33:16 +0000 (0:00:00.131) 0:00:09.107 *********
2025-06-05 19:33:16.367812 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:16.369076 | orchestrator |
2025-06-05 19:33:16.370173 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-05 19:33:16.370251 | orchestrator | Thursday 05 June 2025 19:33:16 +0000 (0:00:00.112) 0:00:09.219 *********
2025-06-05 19:33:16.508487 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:33:16.512034 | orchestrator |
2025-06-05 19:33:16.513760 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-05 19:33:16.514754 | orchestrator | Thursday 05 June 2025 19:33:16 +0000 (0:00:00.137) 0:00:09.356 *********
2025-06-05 19:33:16.664422 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f5969faa-081d-5d9e-9303-7a3301cb4b7a'}})
2025-06-05 19:33:16.664573 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46c2c746-0272-5326-baff-0a3e04c6e4bf'}})
2025-06-05 19:33:16.665486 | orchestrator |
2025-06-05 19:33:16.666912 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-05 19:33:16.667209 | orchestrator | Thursday 05 June 2025 19:33:16 +0000 (0:00:00.158) 0:00:09.515 *********
2025-06-05 19:33:16.796400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f5969faa-081d-5d9e-9303-7a3301cb4b7a'}})
2025-06-05 19:33:16.799608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46c2c746-0272-5326-baff-0a3e04c6e4bf'}})
2025-06-05 19:33:16.799628 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:16.799841 | orchestrator |
2025-06-05 19:33:16.800354 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-05 19:33:16.801325 | orchestrator | Thursday 05 June 2025 19:33:16 +0000 (0:00:00.130) 0:00:09.646 *********
2025-06-05 19:33:17.062254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f5969faa-081d-5d9e-9303-7a3301cb4b7a'}})
2025-06-05 19:33:17.062394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46c2c746-0272-5326-baff-0a3e04c6e4bf'}})
2025-06-05 19:33:17.062484 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:17.063048 | orchestrator |
2025-06-05 19:33:17.063291 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-05 19:33:17.063634 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.267) 0:00:09.913 *********
2025-06-05 19:33:17.174608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f5969faa-081d-5d9e-9303-7a3301cb4b7a'}})
2025-06-05 19:33:17.175188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46c2c746-0272-5326-baff-0a3e04c6e4bf'}})
2025-06-05 19:33:17.175864 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:17.176653 | orchestrator |
2025-06-05 19:33:17.177003 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-05 19:33:17.179430 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.112) 0:00:10.026 *********
2025-06-05 19:33:17.305890 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:33:17.307999 | orchestrator |
2025-06-05 19:33:17.308905 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-05 19:33:17.310812 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.130) 0:00:10.157 *********
2025-06-05 19:33:17.419967 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:33:17.420615 | orchestrator |
2025-06-05 19:33:17.421692 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-05 19:33:17.422924 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.115) 0:00:10.272 *********
2025-06-05 19:33:17.555866 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:17.557418 | orchestrator |
2025-06-05 19:33:17.558291 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-05 19:33:17.558569 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.119) 0:00:10.408 *********
2025-06-05 19:33:17.676095 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:17.682204 | orchestrator |
2025-06-05 19:33:17.682575 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-05 19:33:17.682843 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.135) 0:00:10.527 *********
2025-06-05 19:33:17.812823 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:17.813418 | orchestrator |
2025-06-05 19:33:17.814552 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-05 19:33:17.816305 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.135) 0:00:10.663 *********
2025-06-05 19:33:17.958235 | orchestrator | ok: [testbed-node-3] => {
2025-06-05 19:33:17.959356 | orchestrator |  "ceph_osd_devices": {
2025-06-05 19:33:17.961050 | orchestrator |  "sdb": {
2025-06-05 19:33:17.961376 | orchestrator |  "osd_lvm_uuid": "f5969faa-081d-5d9e-9303-7a3301cb4b7a"
2025-06-05 19:33:17.961841 | orchestrator |  },
2025-06-05 19:33:17.961972 | orchestrator |  "sdc": {
2025-06-05 19:33:17.962442 | orchestrator |  "osd_lvm_uuid": "46c2c746-0272-5326-baff-0a3e04c6e4bf"
2025-06-05 19:33:17.963987 | orchestrator |  }
2025-06-05 19:33:17.964162 | orchestrator |  }
2025-06-05 19:33:17.964851 | orchestrator | }
2025-06-05 19:33:17.966101 | orchestrator |
2025-06-05 19:33:17.966377 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-05 19:33:17.967036 | orchestrator | Thursday 05 June 2025 19:33:17 +0000 (0:00:00.146) 0:00:10.810 *********
2025-06-05 19:33:18.089539 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:18.089875 | orchestrator |
2025-06-05 19:33:18.090075 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-05 19:33:18.090449 | orchestrator | Thursday 05 June 2025 19:33:18 +0000 (0:00:00.128) 0:00:10.938 *********
2025-06-05 19:33:18.206550 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:18.207945 | orchestrator |
2025-06-05 19:33:18.208401 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-05 19:33:18.209714 | orchestrator | Thursday 05 June 2025 19:33:18 +0000 (0:00:00.120) 0:00:11.059 *********
2025-06-05 19:33:18.320818 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:33:18.320998 | orchestrator |
2025-06-05 19:33:18.321438 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-05 19:33:18.322502 | orchestrator | Thursday 05 June 2025 19:33:18 +0000 (0:00:00.113) 0:00:11.172 *********
2025-06-05 19:33:18.524105 | orchestrator | changed: [testbed-node-3] => {
2025-06-05 19:33:18.524910 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-05 19:33:18.524933 | orchestrator |  "ceph_osd_devices": {
2025-06-05 19:33:18.525185 | orchestrator |  "sdb": {
2025-06-05 19:33:18.525215 | orchestrator |  "osd_lvm_uuid": "f5969faa-081d-5d9e-9303-7a3301cb4b7a"
2025-06-05 19:33:18.525337 | orchestrator |  },
2025-06-05 19:33:18.525416 | orchestrator |  "sdc": {
2025-06-05 19:33:18.525871 | orchestrator |  "osd_lvm_uuid": "46c2c746-0272-5326-baff-0a3e04c6e4bf"
2025-06-05 19:33:18.529315 | orchestrator |  }
2025-06-05 19:33:18.529814 | orchestrator |  },
2025-06-05 19:33:18.530075 | orchestrator |  "lvm_volumes": [
2025-06-05 19:33:18.530328 | orchestrator |  {
2025-06-05 19:33:18.532598 | orchestrator |  "data": "osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a",
2025-06-05 19:33:18.533132 | orchestrator |  "data_vg": "ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a"
2025-06-05 19:33:18.534290 | orchestrator |  },
2025-06-05 19:33:18.534440 | orchestrator |  {
2025-06-05 19:33:18.534542 | orchestrator |  "data": "osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf",
2025-06-05 19:33:18.534875 | orchestrator |  "data_vg": "ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf"
2025-06-05 19:33:18.535581 | orchestrator |  }
2025-06-05 19:33:18.536261 | orchestrator |  ]
2025-06-05 19:33:18.536462 | orchestrator |  }
2025-06-05 19:33:18.537257 | orchestrator | }
2025-06-05 19:33:18.537749 | orchestrator |
2025-06-05 19:33:18.538428 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-05 19:33:18.538629 | orchestrator | Thursday 05 June 2025 19:33:18 +0000 (0:00:00.201) 0:00:11.374 *********
2025-06-05
19:33:20.368466 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-05 19:33:20.369775 | orchestrator | 2025-06-05 19:33:20.370881 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-05 19:33:20.371683 | orchestrator | 2025-06-05 19:33:20.372171 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-05 19:33:20.373467 | orchestrator | Thursday 05 June 2025 19:33:20 +0000 (0:00:01.845) 0:00:13.219 ********* 2025-06-05 19:33:20.586102 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-05 19:33:20.586166 | orchestrator | 2025-06-05 19:33:20.589629 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-05 19:33:20.589955 | orchestrator | Thursday 05 June 2025 19:33:20 +0000 (0:00:00.216) 0:00:13.436 ********* 2025-06-05 19:33:20.780313 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:33:20.781012 | orchestrator | 2025-06-05 19:33:20.781074 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:20.781140 | orchestrator | Thursday 05 June 2025 19:33:20 +0000 (0:00:00.195) 0:00:13.632 ********* 2025-06-05 19:33:21.134812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-05 19:33:21.135408 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-05 19:33:21.136696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-05 19:33:21.136802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-05 19:33:21.139105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-05 19:33:21.139806 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-05 19:33:21.140248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-05 19:33:21.140584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-05 19:33:21.141879 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-05 19:33:21.142324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-05 19:33:21.142773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-05 19:33:21.143151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-05 19:33:21.143551 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-05 19:33:21.144150 | orchestrator | 2025-06-05 19:33:21.144397 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:21.144861 | orchestrator | Thursday 05 June 2025 19:33:21 +0000 (0:00:00.353) 0:00:13.985 ********* 2025-06-05 19:33:21.307961 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:21.308138 | orchestrator | 2025-06-05 19:33:21.308724 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:21.309206 | orchestrator | Thursday 05 June 2025 19:33:21 +0000 (0:00:00.172) 0:00:14.158 ********* 2025-06-05 19:33:21.482853 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:21.484432 | orchestrator | 2025-06-05 19:33:21.486140 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:21.487691 | orchestrator | Thursday 05 June 2025 19:33:21 +0000 (0:00:00.175) 0:00:14.333 ********* 2025-06-05 19:33:21.669462 | orchestrator | skipping: 
[testbed-node-4] 2025-06-05 19:33:21.670006 | orchestrator | 2025-06-05 19:33:21.671112 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:21.675385 | orchestrator | Thursday 05 June 2025 19:33:21 +0000 (0:00:00.186) 0:00:14.520 ********* 2025-06-05 19:33:21.863299 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:21.863934 | orchestrator | 2025-06-05 19:33:21.866933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:21.868364 | orchestrator | Thursday 05 June 2025 19:33:21 +0000 (0:00:00.192) 0:00:14.712 ********* 2025-06-05 19:33:22.471311 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:22.473400 | orchestrator | 2025-06-05 19:33:22.475288 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:22.476302 | orchestrator | Thursday 05 June 2025 19:33:22 +0000 (0:00:00.610) 0:00:15.323 ********* 2025-06-05 19:33:22.672651 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:22.675436 | orchestrator | 2025-06-05 19:33:22.676347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:22.678919 | orchestrator | Thursday 05 June 2025 19:33:22 +0000 (0:00:00.197) 0:00:15.520 ********* 2025-06-05 19:33:22.895019 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:22.895169 | orchestrator | 2025-06-05 19:33:22.895325 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:22.895808 | orchestrator | Thursday 05 June 2025 19:33:22 +0000 (0:00:00.225) 0:00:15.746 ********* 2025-06-05 19:33:23.082151 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:23.083468 | orchestrator | 2025-06-05 19:33:23.084701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:23.088364 | 
orchestrator | Thursday 05 June 2025 19:33:23 +0000 (0:00:00.187) 0:00:15.933 ********* 2025-06-05 19:33:23.469704 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9) 2025-06-05 19:33:23.471726 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9) 2025-06-05 19:33:23.472587 | orchestrator | 2025-06-05 19:33:23.473443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:23.477795 | orchestrator | Thursday 05 June 2025 19:33:23 +0000 (0:00:00.387) 0:00:16.320 ********* 2025-06-05 19:33:23.870569 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25) 2025-06-05 19:33:23.872483 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25) 2025-06-05 19:33:23.873865 | orchestrator | 2025-06-05 19:33:23.877708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:23.879306 | orchestrator | Thursday 05 June 2025 19:33:23 +0000 (0:00:00.400) 0:00:16.720 ********* 2025-06-05 19:33:24.329077 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f) 2025-06-05 19:33:24.329777 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f) 2025-06-05 19:33:24.330610 | orchestrator | 2025-06-05 19:33:24.331368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:24.332085 | orchestrator | Thursday 05 June 2025 19:33:24 +0000 (0:00:00.459) 0:00:17.180 ********* 2025-06-05 19:33:24.752900 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2) 2025-06-05 19:33:24.756175 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2) 2025-06-05 19:33:24.762065 | orchestrator | 2025-06-05 19:33:24.763531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:33:24.766717 | orchestrator | Thursday 05 June 2025 19:33:24 +0000 (0:00:00.422) 0:00:17.603 ********* 2025-06-05 19:33:25.068411 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-05 19:33:25.068591 | orchestrator | 2025-06-05 19:33:25.068807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:25.069146 | orchestrator | Thursday 05 June 2025 19:33:25 +0000 (0:00:00.317) 0:00:17.920 ********* 2025-06-05 19:33:25.419062 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-05 19:33:25.420451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-05 19:33:25.422072 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-05 19:33:25.425490 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-05 19:33:25.426297 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-05 19:33:25.427169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-05 19:33:25.427903 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-05 19:33:25.428871 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-05 19:33:25.430130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-05 19:33:25.431245 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-05 19:33:25.435565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-05 19:33:25.435943 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-05 19:33:25.436469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-05 19:33:25.439052 | orchestrator | 2025-06-05 19:33:25.442151 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:25.442204 | orchestrator | Thursday 05 June 2025 19:33:25 +0000 (0:00:00.349) 0:00:18.269 ********* 2025-06-05 19:33:25.618249 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:25.622370 | orchestrator | 2025-06-05 19:33:25.622400 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:25.622490 | orchestrator | Thursday 05 June 2025 19:33:25 +0000 (0:00:00.195) 0:00:18.464 ********* 2025-06-05 19:33:26.279779 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:26.280497 | orchestrator | 2025-06-05 19:33:26.282350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:26.283491 | orchestrator | Thursday 05 June 2025 19:33:26 +0000 (0:00:00.662) 0:00:19.127 ********* 2025-06-05 19:33:26.470226 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:26.470340 | orchestrator | 2025-06-05 19:33:26.471335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:26.472487 | orchestrator | Thursday 05 June 2025 19:33:26 +0000 (0:00:00.192) 0:00:19.320 ********* 2025-06-05 19:33:26.682668 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:26.683547 | orchestrator | 2025-06-05 19:33:26.684322 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-05 19:33:26.685423 | orchestrator | Thursday 05 June 2025 19:33:26 +0000 (0:00:00.214) 0:00:19.534 ********* 2025-06-05 19:33:26.886516 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:26.887548 | orchestrator | 2025-06-05 19:33:26.888679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:26.889399 | orchestrator | Thursday 05 June 2025 19:33:26 +0000 (0:00:00.203) 0:00:19.738 ********* 2025-06-05 19:33:27.088268 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:27.088478 | orchestrator | 2025-06-05 19:33:27.089615 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:27.090578 | orchestrator | Thursday 05 June 2025 19:33:27 +0000 (0:00:00.198) 0:00:19.936 ********* 2025-06-05 19:33:27.271978 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:27.272585 | orchestrator | 2025-06-05 19:33:27.273469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:27.274368 | orchestrator | Thursday 05 June 2025 19:33:27 +0000 (0:00:00.186) 0:00:20.123 ********* 2025-06-05 19:33:27.458120 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:27.458689 | orchestrator | 2025-06-05 19:33:27.459352 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:27.460025 | orchestrator | Thursday 05 June 2025 19:33:27 +0000 (0:00:00.186) 0:00:20.309 ********* 2025-06-05 19:33:28.086354 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-05 19:33:28.087084 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-05 19:33:28.088019 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-05 19:33:28.089015 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-05 19:33:28.093004 | orchestrator | 2025-06-05 
19:33:28.093135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:28.094085 | orchestrator | Thursday 05 June 2025 19:33:28 +0000 (0:00:00.627) 0:00:20.936 ********* 2025-06-05 19:33:28.289521 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:28.291069 | orchestrator | 2025-06-05 19:33:28.296559 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:28.296996 | orchestrator | Thursday 05 June 2025 19:33:28 +0000 (0:00:00.202) 0:00:21.139 ********* 2025-06-05 19:33:28.487501 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:28.487759 | orchestrator | 2025-06-05 19:33:28.488740 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:28.489828 | orchestrator | Thursday 05 June 2025 19:33:28 +0000 (0:00:00.200) 0:00:21.339 ********* 2025-06-05 19:33:28.680773 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:28.682740 | orchestrator | 2025-06-05 19:33:28.683304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:33:28.685726 | orchestrator | Thursday 05 June 2025 19:33:28 +0000 (0:00:00.191) 0:00:21.531 ********* 2025-06-05 19:33:28.962007 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:28.962269 | orchestrator | 2025-06-05 19:33:28.965094 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-05 19:33:28.966078 | orchestrator | Thursday 05 June 2025 19:33:28 +0000 (0:00:00.279) 0:00:21.810 ********* 2025-06-05 19:33:29.295349 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-05 19:33:29.296015 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-05 19:33:29.297293 | orchestrator | 2025-06-05 19:33:29.298239 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-06-05 19:33:29.299773 | orchestrator | Thursday 05 June 2025 19:33:29 +0000 (0:00:00.335) 0:00:22.146 ********* 2025-06-05 19:33:29.434389 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:29.435209 | orchestrator | 2025-06-05 19:33:29.436202 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-05 19:33:29.437113 | orchestrator | Thursday 05 June 2025 19:33:29 +0000 (0:00:00.139) 0:00:22.286 ********* 2025-06-05 19:33:29.561845 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:29.562731 | orchestrator | 2025-06-05 19:33:29.563396 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-05 19:33:29.563920 | orchestrator | Thursday 05 June 2025 19:33:29 +0000 (0:00:00.127) 0:00:22.413 ********* 2025-06-05 19:33:29.693723 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:29.695778 | orchestrator | 2025-06-05 19:33:29.700575 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-05 19:33:29.701819 | orchestrator | Thursday 05 June 2025 19:33:29 +0000 (0:00:00.131) 0:00:22.544 ********* 2025-06-05 19:33:29.823260 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:33:29.824215 | orchestrator | 2025-06-05 19:33:29.825788 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-05 19:33:29.829432 | orchestrator | Thursday 05 June 2025 19:33:29 +0000 (0:00:00.129) 0:00:22.674 ********* 2025-06-05 19:33:29.974513 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f7f7c2a-d649-5a85-84b6-7657bf908d98'}}) 2025-06-05 19:33:29.975299 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67c48ddb-095b-5044-89f7-89f2250f1a91'}}) 2025-06-05 19:33:29.976754 | orchestrator | 2025-06-05 19:33:29.977504 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-06-05 19:33:29.982110 | orchestrator | Thursday 05 June 2025 19:33:29 +0000 (0:00:00.151) 0:00:22.826 ********* 2025-06-05 19:33:30.112844 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f7f7c2a-d649-5a85-84b6-7657bf908d98'}})  2025-06-05 19:33:30.113458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67c48ddb-095b-5044-89f7-89f2250f1a91'}})  2025-06-05 19:33:30.114936 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:30.115821 | orchestrator | 2025-06-05 19:33:30.120042 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-05 19:33:30.121014 | orchestrator | Thursday 05 June 2025 19:33:30 +0000 (0:00:00.138) 0:00:22.964 ********* 2025-06-05 19:33:30.255410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f7f7c2a-d649-5a85-84b6-7657bf908d98'}})  2025-06-05 19:33:30.256017 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67c48ddb-095b-5044-89f7-89f2250f1a91'}})  2025-06-05 19:33:30.258205 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:30.261774 | orchestrator | 2025-06-05 19:33:30.262750 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-05 19:33:30.263449 | orchestrator | Thursday 05 June 2025 19:33:30 +0000 (0:00:00.142) 0:00:23.106 ********* 2025-06-05 19:33:30.396206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f7f7c2a-d649-5a85-84b6-7657bf908d98'}})  2025-06-05 19:33:30.397301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67c48ddb-095b-5044-89f7-89f2250f1a91'}})  2025-06-05 19:33:30.398524 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:30.403045 | 
orchestrator | 2025-06-05 19:33:30.403088 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-05 19:33:30.403904 | orchestrator | Thursday 05 June 2025 19:33:30 +0000 (0:00:00.141) 0:00:23.247 ********* 2025-06-05 19:33:30.529409 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:33:30.530189 | orchestrator | 2025-06-05 19:33:30.531799 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-05 19:33:30.535698 | orchestrator | Thursday 05 June 2025 19:33:30 +0000 (0:00:00.132) 0:00:23.380 ********* 2025-06-05 19:33:30.666850 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:33:30.668591 | orchestrator | 2025-06-05 19:33:30.669836 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-05 19:33:30.674289 | orchestrator | Thursday 05 June 2025 19:33:30 +0000 (0:00:00.136) 0:00:23.517 ********* 2025-06-05 19:33:30.784004 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:30.784908 | orchestrator | 2025-06-05 19:33:30.786235 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-05 19:33:30.787215 | orchestrator | Thursday 05 June 2025 19:33:30 +0000 (0:00:00.117) 0:00:23.635 ********* 2025-06-05 19:33:31.115906 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:31.117327 | orchestrator | 2025-06-05 19:33:31.121233 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-05 19:33:31.122183 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 (0:00:00.330) 0:00:23.965 ********* 2025-06-05 19:33:31.238613 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:31.240187 | orchestrator | 2025-06-05 19:33:31.241550 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-05 19:33:31.245428 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 
(0:00:00.123) 0:00:24.089 ********* 2025-06-05 19:33:31.381734 | orchestrator | ok: [testbed-node-4] => { 2025-06-05 19:33:31.382718 | orchestrator |  "ceph_osd_devices": { 2025-06-05 19:33:31.383180 | orchestrator |  "sdb": { 2025-06-05 19:33:31.384632 | orchestrator |  "osd_lvm_uuid": "9f7f7c2a-d649-5a85-84b6-7657bf908d98" 2025-06-05 19:33:31.385582 | orchestrator |  }, 2025-06-05 19:33:31.386515 | orchestrator |  "sdc": { 2025-06-05 19:33:31.388205 | orchestrator |  "osd_lvm_uuid": "67c48ddb-095b-5044-89f7-89f2250f1a91" 2025-06-05 19:33:31.389096 | orchestrator |  } 2025-06-05 19:33:31.389693 | orchestrator |  } 2025-06-05 19:33:31.390601 | orchestrator | } 2025-06-05 19:33:31.391270 | orchestrator | 2025-06-05 19:33:31.391888 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-05 19:33:31.393158 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 (0:00:00.142) 0:00:24.231 ********* 2025-06-05 19:33:31.521860 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:31.522337 | orchestrator | 2025-06-05 19:33:31.523575 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-05 19:33:31.529150 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 (0:00:00.140) 0:00:24.372 ********* 2025-06-05 19:33:31.644484 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:31.647006 | orchestrator | 2025-06-05 19:33:31.647129 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-05 19:33:31.647180 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 (0:00:00.123) 0:00:24.495 ********* 2025-06-05 19:33:31.766076 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:33:31.767587 | orchestrator | 2025-06-05 19:33:31.771540 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-05 19:33:31.772869 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 
(0:00:00.119) 0:00:24.615 ********* 2025-06-05 19:33:31.965503 | orchestrator | changed: [testbed-node-4] => { 2025-06-05 19:33:31.968325 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-05 19:33:31.972154 | orchestrator |  "ceph_osd_devices": { 2025-06-05 19:33:31.973486 | orchestrator |  "sdb": { 2025-06-05 19:33:31.975359 | orchestrator |  "osd_lvm_uuid": "9f7f7c2a-d649-5a85-84b6-7657bf908d98" 2025-06-05 19:33:31.976384 | orchestrator |  }, 2025-06-05 19:33:31.977778 | orchestrator |  "sdc": { 2025-06-05 19:33:31.978902 | orchestrator |  "osd_lvm_uuid": "67c48ddb-095b-5044-89f7-89f2250f1a91" 2025-06-05 19:33:31.979820 | orchestrator |  } 2025-06-05 19:33:31.983139 | orchestrator |  }, 2025-06-05 19:33:31.983447 | orchestrator |  "lvm_volumes": [ 2025-06-05 19:33:31.984080 | orchestrator |  { 2025-06-05 19:33:31.984638 | orchestrator |  "data": "osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98", 2025-06-05 19:33:31.985232 | orchestrator |  "data_vg": "ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98" 2025-06-05 19:33:31.985840 | orchestrator |  }, 2025-06-05 19:33:31.986340 | orchestrator |  { 2025-06-05 19:33:31.988036 | orchestrator |  "data": "osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91", 2025-06-05 19:33:31.988405 | orchestrator |  "data_vg": "ceph-67c48ddb-095b-5044-89f7-89f2250f1a91" 2025-06-05 19:33:31.988784 | orchestrator |  } 2025-06-05 19:33:31.989136 | orchestrator |  ] 2025-06-05 19:33:31.989460 | orchestrator |  } 2025-06-05 19:33:31.989850 | orchestrator | } 2025-06-05 19:33:31.990982 | orchestrator | 2025-06-05 19:33:31.991005 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-05 19:33:31.991039 | orchestrator | Thursday 05 June 2025 19:33:31 +0000 (0:00:00.201) 0:00:24.816 ********* 2025-06-05 19:33:32.844452 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-05 19:33:32.844539 | orchestrator | 2025-06-05 19:33:32.846074 | orchestrator | PLAY [Ceph 
configure LVM] ******************************************************
2025-06-05 19:33:32.848105 | orchestrator |
2025-06-05 19:33:32.851146 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-05 19:33:32.853271 | orchestrator | Thursday 05 June 2025 19:33:32 +0000 (0:00:00.877) 0:00:25.694 *********
2025-06-05 19:33:33.198913 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-05 19:33:33.199179 | orchestrator |
2025-06-05 19:33:33.203111 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-05 19:33:33.203984 | orchestrator | Thursday 05 June 2025 19:33:33 +0000 (0:00:00.356) 0:00:26.050 *********
2025-06-05 19:33:33.650518 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:33:33.651130 | orchestrator |
2025-06-05 19:33:33.655587 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:33.656243 | orchestrator | Thursday 05 June 2025 19:33:33 +0000 (0:00:00.451) 0:00:26.501 *********
2025-06-05 19:33:33.987018 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-05 19:33:33.990903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-05 19:33:33.992187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-05 19:33:33.993097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-05 19:33:33.994363 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-05 19:33:33.996750 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-05 19:33:33.997173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-05 19:33:33.998096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-05 19:33:33.999043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-05 19:33:34.000013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-05 19:33:34.001407 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-05 19:33:34.002626 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-05 19:33:34.005119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-05 19:33:34.005879 | orchestrator |
2025-06-05 19:33:34.007569 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:34.008690 | orchestrator | Thursday 05 June 2025 19:33:33 +0000 (0:00:00.335) 0:00:26.837 *********
2025-06-05 19:33:34.170328 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:34.171157 | orchestrator |
2025-06-05 19:33:34.172119 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:34.173158 | orchestrator | Thursday 05 June 2025 19:33:34 +0000 (0:00:00.184) 0:00:27.022 *********
2025-06-05 19:33:34.352762 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:34.353410 | orchestrator |
2025-06-05 19:33:34.354145 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:34.354918 | orchestrator | Thursday 05 June 2025 19:33:34 +0000 (0:00:00.182) 0:00:27.204 *********
2025-06-05 19:33:34.566274 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:34.570074 | orchestrator |
2025-06-05 19:33:34.570115 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:34.570129 | orchestrator | Thursday 05 June 2025 19:33:34 +0000 (0:00:00.212) 0:00:27.417 *********
2025-06-05 19:33:34.730999 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:34.736159 | orchestrator |
2025-06-05 19:33:34.737387 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:34.738607 | orchestrator | Thursday 05 June 2025 19:33:34 +0000 (0:00:00.162) 0:00:27.580 *********
2025-06-05 19:33:34.907854 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:34.909319 | orchestrator |
2025-06-05 19:33:34.910449 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:34.911538 | orchestrator | Thursday 05 June 2025 19:33:34 +0000 (0:00:00.178) 0:00:27.758 *********
2025-06-05 19:33:35.084108 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:35.085380 | orchestrator |
2025-06-05 19:33:35.086478 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:35.087455 | orchestrator | Thursday 05 June 2025 19:33:35 +0000 (0:00:00.177) 0:00:27.935 *********
2025-06-05 19:33:35.268292 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:35.268381 | orchestrator |
2025-06-05 19:33:35.268504 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:35.269358 | orchestrator | Thursday 05 June 2025 19:33:35 +0000 (0:00:00.182) 0:00:28.117 *********
2025-06-05 19:33:35.466755 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:35.466959 | orchestrator |
2025-06-05 19:33:35.468406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:35.469786 | orchestrator | Thursday 05 June 2025 19:33:35 +0000 (0:00:00.200) 0:00:28.317 *********
2025-06-05 19:33:35.975759 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42)
2025-06-05 19:33:35.976098 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42)
2025-06-05 19:33:35.976865 | orchestrator |
2025-06-05 19:33:35.976923 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:35.976981 | orchestrator | Thursday 05 June 2025 19:33:35 +0000 (0:00:00.510) 0:00:28.828 *********
2025-06-05 19:33:36.603830 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8)
2025-06-05 19:33:36.604520 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8)
2025-06-05 19:33:36.606166 | orchestrator |
2025-06-05 19:33:36.606894 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:36.607349 | orchestrator | Thursday 05 June 2025 19:33:36 +0000 (0:00:00.626) 0:00:29.455 *********
2025-06-05 19:33:36.969720 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e)
2025-06-05 19:33:36.970820 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e)
2025-06-05 19:33:36.971695 | orchestrator |
2025-06-05 19:33:36.972573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:36.973033 | orchestrator | Thursday 05 June 2025 19:33:36 +0000 (0:00:00.365) 0:00:29.820 *********
2025-06-05 19:33:37.346831 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b)
2025-06-05 19:33:37.347285 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b)
2025-06-05 19:33:37.348007 | orchestrator |
2025-06-05 19:33:37.348764 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:33:37.349312 | orchestrator | Thursday 05 June 2025 19:33:37 +0000 (0:00:00.376) 0:00:30.196 *********
2025-06-05 19:33:37.648411 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-05 19:33:37.648779 | orchestrator |
2025-06-05 19:33:37.649188 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:37.650964 | orchestrator | Thursday 05 June 2025 19:33:37 +0000 (0:00:00.303) 0:00:30.500 *********
2025-06-05 19:33:37.973153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-05 19:33:37.973822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-05 19:33:37.975279 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-05 19:33:37.976452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-05 19:33:37.977525 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-05 19:33:37.978360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-05 19:33:37.979238 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-05 19:33:37.979870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-05 19:33:37.980730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-05 19:33:37.981352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-05 19:33:37.981791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-05 19:33:37.982362 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-05 19:33:37.982723 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-05 19:33:37.983140 | orchestrator |
2025-06-05 19:33:37.983530 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:37.984012 | orchestrator | Thursday 05 June 2025 19:33:37 +0000 (0:00:00.322) 0:00:30.823 *********
2025-06-05 19:33:38.154152 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:38.154323 | orchestrator |
2025-06-05 19:33:38.155350 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:38.156088 | orchestrator | Thursday 05 June 2025 19:33:38 +0000 (0:00:00.181) 0:00:31.005 *********
2025-06-05 19:33:38.330095 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:38.331208 | orchestrator |
2025-06-05 19:33:38.331695 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:38.332093 | orchestrator | Thursday 05 June 2025 19:33:38 +0000 (0:00:00.175) 0:00:31.181 *********
2025-06-05 19:33:38.515121 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:38.515810 | orchestrator |
2025-06-05 19:33:38.516926 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:38.517475 | orchestrator | Thursday 05 June 2025 19:33:38 +0000 (0:00:00.185) 0:00:31.366 *********
2025-06-05 19:33:38.713097 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:38.714085 | orchestrator |
2025-06-05 19:33:38.715185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:38.715890 | orchestrator | Thursday 05 June 2025 19:33:38 +0000 (0:00:00.197) 0:00:31.564 *********
2025-06-05 19:33:38.906300 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:38.906493 | orchestrator |
2025-06-05 19:33:38.908111 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:38.908141 | orchestrator | Thursday 05 June 2025 19:33:38 +0000 (0:00:00.192) 0:00:31.757 *********
2025-06-05 19:33:39.521073 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:39.522455 | orchestrator |
2025-06-05 19:33:39.524121 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:39.525163 | orchestrator | Thursday 05 June 2025 19:33:39 +0000 (0:00:00.613) 0:00:32.371 *********
2025-06-05 19:33:39.712587 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:39.713187 | orchestrator |
2025-06-05 19:33:39.714331 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:39.715713 | orchestrator | Thursday 05 June 2025 19:33:39 +0000 (0:00:00.192) 0:00:32.563 *********
2025-06-05 19:33:39.904863 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:39.904982 | orchestrator |
2025-06-05 19:33:39.906057 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:39.907444 | orchestrator | Thursday 05 June 2025 19:33:39 +0000 (0:00:00.191) 0:00:32.755 *********
2025-06-05 19:33:40.518626 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-05 19:33:40.518837 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-05 19:33:40.522296 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-05 19:33:40.523179 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-05 19:33:40.524124 | orchestrator |
2025-06-05 19:33:40.524821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:40.526394 | orchestrator | Thursday 05 June 2025 19:33:40 +0000 (0:00:00.612) 0:00:33.367 *********
2025-06-05 19:33:40.740912 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:40.741075 | orchestrator |
2025-06-05 19:33:40.741804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:40.742868 | orchestrator | Thursday 05 June 2025 19:33:40 +0000 (0:00:00.224) 0:00:33.592 *********
2025-06-05 19:33:40.927376 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:40.927927 | orchestrator |
2025-06-05 19:33:40.929031 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:40.930237 | orchestrator | Thursday 05 June 2025 19:33:40 +0000 (0:00:00.186) 0:00:33.778 *********
2025-06-05 19:33:41.125627 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:41.125866 | orchestrator |
2025-06-05 19:33:41.126587 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:33:41.127446 | orchestrator | Thursday 05 June 2025 19:33:41 +0000 (0:00:00.196) 0:00:33.975 *********
2025-06-05 19:33:41.319462 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:41.319684 | orchestrator |
2025-06-05 19:33:41.320080 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-05 19:33:41.321047 | orchestrator | Thursday 05 June 2025 19:33:41 +0000 (0:00:00.194) 0:00:34.170 *********
2025-06-05 19:33:41.485206 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-05 19:33:41.485924 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-05 19:33:41.488059 | orchestrator |
2025-06-05 19:33:41.489088 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-05 19:33:41.491700 | orchestrator | Thursday 05 June 2025 19:33:41 +0000 (0:00:00.165) 0:00:34.335 *********
2025-06-05 19:33:41.618801 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:41.622768 | orchestrator |
2025-06-05 19:33:41.624386 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-05 19:33:41.625515 | orchestrator | Thursday 05 June 2025 19:33:41 +0000 (0:00:00.131) 0:00:34.467 *********
2025-06-05 19:33:41.765358 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:41.765842 | orchestrator |
2025-06-05 19:33:41.766460 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-05 19:33:41.767203 | orchestrator | Thursday 05 June 2025 19:33:41 +0000 (0:00:00.147) 0:00:34.615 *********
2025-06-05 19:33:41.892131 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:41.892310 | orchestrator |
2025-06-05 19:33:41.892490 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-05 19:33:41.892895 | orchestrator | Thursday 05 June 2025 19:33:41 +0000 (0:00:00.128) 0:00:34.743 *********
2025-06-05 19:33:42.349718 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:33:42.350416 | orchestrator |
2025-06-05 19:33:42.351225 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-05 19:33:42.351983 | orchestrator | Thursday 05 June 2025 19:33:42 +0000 (0:00:00.455) 0:00:35.199 *********
2025-06-05 19:33:42.558473 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8d24cd11-dfc5-563c-af80-3beb61f8ef58'}})
2025-06-05 19:33:42.558712 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'afd5871a-1fd2-5e8b-989c-517ad42902e5'}})
2025-06-05 19:33:42.558802 | orchestrator |
2025-06-05 19:33:42.559152 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-05 19:33:42.560804 | orchestrator | Thursday 05 June 2025 19:33:42 +0000 (0:00:00.208) 0:00:35.407 *********
2025-06-05 19:33:42.733717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8d24cd11-dfc5-563c-af80-3beb61f8ef58'}})
2025-06-05 19:33:42.735043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'afd5871a-1fd2-5e8b-989c-517ad42902e5'}})
2025-06-05 19:33:42.735797 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:42.736917 | orchestrator |
2025-06-05 19:33:42.740392 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-05 19:33:42.740486 | orchestrator | Thursday 05 June 2025 19:33:42 +0000 (0:00:00.177) 0:00:35.585 *********
2025-06-05 19:33:42.878837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8d24cd11-dfc5-563c-af80-3beb61f8ef58'}})
2025-06-05 19:33:42.879793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'afd5871a-1fd2-5e8b-989c-517ad42902e5'}})
2025-06-05 19:33:42.880914 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:42.882138 | orchestrator |
2025-06-05 19:33:42.882793 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-05 19:33:42.883529 | orchestrator | Thursday 05 June 2025 19:33:42 +0000 (0:00:00.145) 0:00:35.730 *********
2025-06-05 19:33:43.024924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8d24cd11-dfc5-563c-af80-3beb61f8ef58'}})
2025-06-05 19:33:43.025899 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'afd5871a-1fd2-5e8b-989c-517ad42902e5'}})
2025-06-05 19:33:43.027717 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:43.030147 | orchestrator |
2025-06-05 19:33:43.030929 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-05 19:33:43.031813 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.145) 0:00:35.876 *********
2025-06-05 19:33:43.149452 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:33:43.150181 | orchestrator |
2025-06-05 19:33:43.151232 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-05 19:33:43.152492 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.125) 0:00:36.001 *********
2025-06-05 19:33:43.288056 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:33:43.288954 | orchestrator |
2025-06-05 19:33:43.290086 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-05 19:33:43.290994 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.137) 0:00:36.139 *********
2025-06-05 19:33:43.412375 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:43.413388 | orchestrator |
2025-06-05 19:33:43.413917 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-05 19:33:43.414889 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.124) 0:00:36.263 *********
2025-06-05 19:33:43.543541 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:43.544700 | orchestrator |
2025-06-05 19:33:43.545300 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-05 19:33:43.546576 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.131) 0:00:36.395 *********
2025-06-05 19:33:43.676546 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:43.676790 | orchestrator |
2025-06-05 19:33:43.677898 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-05 19:33:43.679108 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.132) 0:00:36.527 *********
2025-06-05 19:33:43.813717 | orchestrator | ok: [testbed-node-5] => {
2025-06-05 19:33:43.815141 | orchestrator |  "ceph_osd_devices": {
2025-06-05 19:33:43.815592 | orchestrator |  "sdb": {
2025-06-05 19:33:43.816763 | orchestrator |  "osd_lvm_uuid": "8d24cd11-dfc5-563c-af80-3beb61f8ef58"
2025-06-05 19:33:43.817938 | orchestrator |  },
2025-06-05 19:33:43.818925 | orchestrator |  "sdc": {
2025-06-05 19:33:43.819367 | orchestrator |  "osd_lvm_uuid": "afd5871a-1fd2-5e8b-989c-517ad42902e5"
2025-06-05 19:33:43.820879 | orchestrator |  }
2025-06-05 19:33:43.821599 | orchestrator |  }
2025-06-05 19:33:43.822602 | orchestrator | }
2025-06-05 19:33:43.823675 | orchestrator |
2025-06-05 19:33:43.824910 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-05 19:33:43.825035 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.137) 0:00:36.665 *********
2025-06-05 19:33:43.941109 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:43.941430 | orchestrator |
2025-06-05 19:33:43.942188 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-05 19:33:43.942939 | orchestrator | Thursday 05 June 2025 19:33:43 +0000 (0:00:00.127) 0:00:36.792 *********
2025-06-05 19:33:44.278309 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:44.278613 | orchestrator |
2025-06-05 19:33:44.279432 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-05 19:33:44.280020 | orchestrator | Thursday 05 June 2025 19:33:44 +0000 (0:00:00.336) 0:00:37.129 *********
2025-06-05 19:33:44.418904 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:33:44.419527 | orchestrator |
2025-06-05 19:33:44.420115 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-05 19:33:44.421025 | orchestrator | Thursday 05 June 2025 19:33:44 +0000 (0:00:00.140) 0:00:37.269 *********
2025-06-05 19:33:44.624485 | orchestrator | changed: [testbed-node-5] => {
2025-06-05 19:33:44.625139 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-05 19:33:44.626191 | orchestrator |  "ceph_osd_devices": {
2025-06-05 19:33:44.627238 | orchestrator |  "sdb": {
2025-06-05 19:33:44.627324 | orchestrator |  "osd_lvm_uuid": "8d24cd11-dfc5-563c-af80-3beb61f8ef58"
2025-06-05 19:33:44.628160 | orchestrator |  },
2025-06-05 19:33:44.629055 | orchestrator |  "sdc": {
2025-06-05 19:33:44.629857 | orchestrator |  "osd_lvm_uuid": "afd5871a-1fd2-5e8b-989c-517ad42902e5"
2025-06-05 19:33:44.630602 | orchestrator |  }
2025-06-05 19:33:44.631299 | orchestrator |  },
2025-06-05 19:33:44.631799 | orchestrator |  "lvm_volumes": [
2025-06-05 19:33:44.632299 | orchestrator |  {
2025-06-05 19:33:44.632984 | orchestrator |  "data": "osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58",
2025-06-05 19:33:44.633628 | orchestrator |  "data_vg": "ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58"
2025-06-05 19:33:44.633981 | orchestrator |  },
2025-06-05 19:33:44.634482 | orchestrator |  {
2025-06-05 19:33:44.635188 | orchestrator |  "data": "osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5",
2025-06-05 19:33:44.635402 | orchestrator |  "data_vg": "ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5"
2025-06-05 19:33:44.635905 | orchestrator |  }
2025-06-05 19:33:44.636285 | orchestrator |  ]
2025-06-05 19:33:44.636767 | orchestrator |  }
2025-06-05 19:33:44.637119 | orchestrator | }
2025-06-05 19:33:44.637499 | orchestrator |
2025-06-05 19:33:44.637984 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-05 19:33:44.638294 | orchestrator | Thursday 05 June 2025 19:33:44 +0000 (0:00:00.203) 0:00:37.473 *********
2025-06-05 19:33:45.614608 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-05 19:33:45.615271 | orchestrator |
2025-06-05 19:33:45.616075 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:33:45.617134 | orchestrator | 2025-06-05 19:33:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:33:45.617157 | orchestrator | 2025-06-05 19:33:45 | INFO  | Please wait and do not abort execution.
2025-06-05 19:33:45.618417 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-05 19:33:45.619179 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-05 19:33:45.619541 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-05 19:33:45.620354 | orchestrator |
2025-06-05 19:33:45.621395 | orchestrator |
2025-06-05 19:33:45.621781 | orchestrator |
2025-06-05 19:33:45.622380 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:33:45.622924 | orchestrator | Thursday 05 June 2025 19:33:45 +0000 (0:00:00.990) 0:00:38.464 *********
2025-06-05 19:33:45.624594 | orchestrator | ===============================================================================
2025-06-05 19:33:45.625128 | orchestrator | Write configuration file ------------------------------------------------ 3.71s
2025-06-05 19:33:45.625467 | orchestrator | Add known partitions to the list of available block devices ------------- 1.04s
2025-06-05 19:33:45.626228 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s
2025-06-05 19:33:45.626737 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2025-06-05 19:33:45.627131 | orchestrator | Get initial list of available block devices ----------------------------- 0.84s
2025-06-05 19:33:45.627839 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.81s
2025-06-05 19:33:45.628300 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.72s
2025-06-05 19:33:45.628795 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.68s
2025-06-05 19:33:45.629385 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s
2025-06-05 19:33:45.629722 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-06-05 19:33:45.630379 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-06-05 19:33:45.630891 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-06-05 19:33:45.631378 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-06-05 19:33:45.631822 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s
2025-06-05 19:33:45.632264 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-06-05 19:33:45.633467 | orchestrator | Print configuration data ------------------------------------------------ 0.61s
2025-06-05 19:33:45.634247 | orchestrator | Set WAL devices config data --------------------------------------------- 0.58s
2025-06-05 19:33:45.634955 | orchestrator | Print DB devices -------------------------------------------------------- 0.58s
2025-06-05 19:33:45.635744 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.55s
2025-06-05 19:33:45.636555 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s
2025-06-05 19:33:57.975053 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:33:57.975165 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:33:57.975179 | orchestrator | Registering Redlock._release_script
2025-06-05 19:33:58.031807 | orchestrator | 2025-06-05 19:33:58 | INFO  | Task 6d7a284c-635a-4b39-bb26-c3a282d8df16 (sync inventory) is running in background. Output coming soon.
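The "Print configuration data" output above shows how the play derives the ceph-ansible style `lvm_volumes` list from the `ceph_osd_devices` dictionary: for each device, the stable `osd_lvm_uuid` is embedded into an LV name (`osd-block-<uuid>`) and a VG name (`ceph-<uuid>`). A minimal sketch of that mapping, using the values from the log; the helper name `build_lvm_volumes` is illustrative and not part of the OSISM code base:

```python
# Sketch of the ceph_osd_devices -> lvm_volumes mapping reported in the
# "Print configuration data" task above (block-only layout, no DB/WAL).
# The UUIDs are copied from the log; build_lvm_volumes is a hypothetical name.

def build_lvm_volumes(ceph_osd_devices):
    """Derive one lvm_volumes entry per OSD device from its osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",      # LV name
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",        # VG name
        }
        for cfg in ceph_osd_devices.values()
    ]

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8d24cd11-dfc5-563c-af80-3beb61f8ef58"},
    "sdc": {"osd_lvm_uuid": "afd5871a-1fd2-5e8b-989c-517ad42902e5"},
}

lvm_volumes = build_lvm_volumes(ceph_osd_devices)
```

Because the VG/LV names carry the device's UUID rather than the kernel name (`sdb`, `sdc`), the resulting configuration stays stable even if device enumeration changes across reboots.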
2025-06-05 19:34:15.978284 | orchestrator | 2025-06-05 19:33:59 | INFO  | Starting group_vars file reorganization
2025-06-05 19:34:15.978388 | orchestrator | 2025-06-05 19:33:59 | INFO  | Moved 0 file(s) to their respective directories
2025-06-05 19:34:15.978406 | orchestrator | 2025-06-05 19:33:59 | INFO  | Group_vars file reorganization completed
2025-06-05 19:34:15.978418 | orchestrator | 2025-06-05 19:34:01 | INFO  | Starting variable preparation from inventory
2025-06-05 19:34:15.978430 | orchestrator | 2025-06-05 19:34:02 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-05 19:34:15.978441 | orchestrator | 2025-06-05 19:34:02 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-05 19:34:15.978475 | orchestrator | 2025-06-05 19:34:02 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-05 19:34:15.978487 | orchestrator | 2025-06-05 19:34:02 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-05 19:34:15.978498 | orchestrator | 2025-06-05 19:34:02 | INFO  | Variable preparation completed:
2025-06-05 19:34:15.978510 | orchestrator | 2025-06-05 19:34:03 | INFO  | Starting inventory overwrite handling
2025-06-05 19:34:15.978521 | orchestrator | 2025-06-05 19:34:03 | INFO  | Handling group overwrites in 99-overwrite
2025-06-05 19:34:15.978532 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removing group frr:children from 60-generic
2025-06-05 19:34:15.978543 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removing group storage:children from 50-kolla
2025-06-05 19:34:15.978554 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-05 19:34:15.978572 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-05 19:34:15.978584 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-05 19:34:15.978595 | orchestrator | 2025-06-05 19:34:03 | INFO  | Handling group overwrites in 20-roles
2025-06-05 19:34:15.978606 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-05 19:34:15.978650 | orchestrator | 2025-06-05 19:34:03 | INFO  | Removed 6 group(s) in total
2025-06-05 19:34:15.978662 | orchestrator | 2025-06-05 19:34:03 | INFO  | Inventory overwrite handling completed
2025-06-05 19:34:15.978673 | orchestrator | 2025-06-05 19:34:04 | INFO  | Starting merge of inventory files
2025-06-05 19:34:15.978684 | orchestrator | 2025-06-05 19:34:04 | INFO  | Inventory files merged successfully
2025-06-05 19:34:15.978695 | orchestrator | 2025-06-05 19:34:08 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-05 19:34:15.978706 | orchestrator | 2025-06-05 19:34:14 | INFO  | Successfully wrote ClusterShell configuration
2025-06-05 19:34:15.978717 | orchestrator | [master 0922ef0] 2025-06-05-19-34
2025-06-05 19:34:15.978730 | orchestrator | 1 file changed, 30 insertions(+), 3 deletions(-)
2025-06-05 19:34:17.650882 | orchestrator | 2025-06-05 19:34:17 | INFO  | Task dac9b04b-89c9-41b2-85f1-b48aaabadb5e (ceph-create-lvm-devices) was prepared for execution.
2025-06-05 19:34:17.650976 | orchestrator | 2025-06-05 19:34:17 | INFO  | It takes a moment until task dac9b04b-89c9-41b2-85f1-b48aaabadb5e (ceph-create-lvm-devices) has been started and output is visible here.
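The inventory overwrite handling logged above removes a group from lower-priority inventory layers whenever a higher-priority layer (here `99-overwrite`, then `20-roles`) defines a group of the same name, so the higher-priority definition wins after the merge. A rough sketch of that behavior; the layer and group names mirror the log (including its `50-infrastruture` spelling), while the `handle_overwrites` function itself is hypothetical and not the actual OSISM implementation:

```python
# Illustrative sketch of inventory group overwrite handling, assuming each
# layer is modeled as a set of group names. Not the real OSISM code.

def handle_overwrites(layers, overwrite_layer):
    """Remove groups defined in overwrite_layer from every other layer."""
    removed = 0
    for group in layers.get(overwrite_layer, set()):
        for name, groups in layers.items():
            if name != overwrite_layer and group in groups:
                groups.discard(group)
                removed += 1
    return removed

layers = {
    "99-overwrite": {"frr:children", "storage:children", "netbird:children", "ceph-rgw", "ceph-mds"},
    "60-generic": {"frr:children"},
    "50-kolla": {"storage:children"},
    "50-infrastruture": {"netbird:children", "k3s_node"},
    "50-ceph": {"ceph-rgw", "ceph-mds"},
    "20-roles": {"k3s_node"},
}

removed = handle_overwrites(layers, "99-overwrite") + handle_overwrites(layers, "20-roles")
# removed == 6, matching the "Removed 6 group(s) in total" line in the log
```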
2025-06-05 19:34:21.269350 | orchestrator |
2025-06-05 19:34:21.270369 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-05 19:34:21.271690 | orchestrator |
2025-06-05 19:34:21.272597 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-05 19:34:21.273423 | orchestrator | Thursday 05 June 2025 19:34:21 +0000 (0:00:00.226) 0:00:00.226 *********
2025-06-05 19:34:21.475225 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 19:34:21.475371 | orchestrator |
2025-06-05 19:34:21.475864 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-05 19:34:21.476558 | orchestrator | Thursday 05 June 2025 19:34:21 +0000 (0:00:00.208) 0:00:00.434 *********
2025-06-05 19:34:21.666324 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:34:21.667008 | orchestrator |
2025-06-05 19:34:21.667786 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:21.668374 | orchestrator | Thursday 05 June 2025 19:34:21 +0000 (0:00:00.191) 0:00:00.625 *********
2025-06-05 19:34:22.014185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-06-05 19:34:22.015314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-06-05 19:34:22.016054 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-06-05 19:34:22.017485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-06-05 19:34:22.018241 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-06-05 19:34:22.019303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-06-05 19:34:22.020155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-06-05 19:34:22.021081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-06-05 19:34:22.021722 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-06-05 19:34:22.022309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-06-05 19:34:22.023061 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-06-05 19:34:22.023720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-06-05 19:34:22.024434 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-06-05 19:34:22.025018 | orchestrator |
2025-06-05 19:34:22.026150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:22.026495 | orchestrator | Thursday 05 June 2025 19:34:22 +0000 (0:00:00.347) 0:00:00.973 *********
2025-06-05 19:34:22.365872 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:22.365956 | orchestrator |
2025-06-05 19:34:22.366089 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:22.367221 | orchestrator | Thursday 05 June 2025 19:34:22 +0000 (0:00:00.351) 0:00:01.324 *********
2025-06-05 19:34:22.540534 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:22.540733 | orchestrator |
2025-06-05 19:34:22.541423 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:22.542639 | orchestrator | Thursday 05 June 2025 19:34:22 +0000 (0:00:00.174) 0:00:01.499 *********
2025-06-05 19:34:22.715167 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:22.715941 | orchestrator |
2025-06-05 19:34:22.716952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:22.717435 | orchestrator | Thursday 05 June 2025 19:34:22 +0000 (0:00:00.174) 0:00:01.674 *********
2025-06-05 19:34:22.887746 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:22.888320 | orchestrator |
2025-06-05 19:34:22.889881 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:22.891086 | orchestrator | Thursday 05 June 2025 19:34:22 +0000 (0:00:00.171) 0:00:01.846 *********
2025-06-05 19:34:23.062336 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:23.062419 | orchestrator |
2025-06-05 19:34:23.062434 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:23.062523 | orchestrator | Thursday 05 June 2025 19:34:23 +0000 (0:00:00.175) 0:00:02.022 *********
2025-06-05 19:34:23.228168 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:23.228890 | orchestrator |
2025-06-05 19:34:23.229293 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:23.230144 | orchestrator | Thursday 05 June 2025 19:34:23 +0000 (0:00:00.165) 0:00:02.187 *********
2025-06-05 19:34:23.404942 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:23.405527 | orchestrator |
2025-06-05 19:34:23.406342 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:23.407009 | orchestrator | Thursday 05 June 2025 19:34:23 +0000 (0:00:00.175) 0:00:02.363 *********
2025-06-05 19:34:23.570767 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:23.571111 | orchestrator |
2025-06-05 19:34:23.572252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:23.572665 | orchestrator | Thursday 05 June 2025 19:34:23 +0000 (0:00:00.166) 0:00:02.530 *********
2025-06-05 19:34:23.938420 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30)
2025-06-05 19:34:23.938840 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30)
2025-06-05 19:34:23.939599 | orchestrator |
2025-06-05 19:34:23.940536 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:23.941339 | orchestrator | Thursday 05 June 2025 19:34:23 +0000 (0:00:00.367) 0:00:02.897 *********
2025-06-05 19:34:24.283117 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312)
2025-06-05 19:34:24.283354 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312)
2025-06-05 19:34:24.283975 | orchestrator |
2025-06-05 19:34:24.284600 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:24.285309 | orchestrator | Thursday 05 June 2025 19:34:24 +0000 (0:00:00.344) 0:00:03.242 *********
2025-06-05 19:34:24.768396 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441)
2025-06-05 19:34:24.768486 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441)
2025-06-05 19:34:24.769433 | orchestrator |
2025-06-05 19:34:24.770219 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:24.770792 | orchestrator | Thursday 05 June 2025 19:34:24 +0000 (0:00:00.483) 0:00:03.726 *********
2025-06-05 19:34:25.266392 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766)
2025-06-05 19:34:25.266483 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766)
2025-06-05 19:34:25.267753 | orchestrator |
2025-06-05 19:34:25.268749 |
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:34:25.269115 | orchestrator | Thursday 05 June 2025 19:34:25 +0000 (0:00:00.498) 0:00:04.225 ********* 2025-06-05 19:34:25.796675 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-05 19:34:25.797425 | orchestrator | 2025-06-05 19:34:25.798544 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:25.799313 | orchestrator | Thursday 05 June 2025 19:34:25 +0000 (0:00:00.529) 0:00:04.754 ********* 2025-06-05 19:34:26.159496 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-05 19:34:26.161655 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-05 19:34:26.161736 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-05 19:34:26.162790 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-05 19:34:26.163770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-05 19:34:26.164446 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-05 19:34:26.164998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-05 19:34:26.165924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-05 19:34:26.166702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-05 19:34:26.168938 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-05 19:34:26.169473 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-05 19:34:26.170095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-05 19:34:26.170733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-05 19:34:26.171145 | orchestrator | 2025-06-05 19:34:26.171624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:26.172077 | orchestrator | Thursday 05 June 2025 19:34:26 +0000 (0:00:00.363) 0:00:05.118 ********* 2025-06-05 19:34:26.340340 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:26.340454 | orchestrator | 2025-06-05 19:34:26.340631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:26.341253 | orchestrator | Thursday 05 June 2025 19:34:26 +0000 (0:00:00.179) 0:00:05.298 ********* 2025-06-05 19:34:26.520331 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:26.520968 | orchestrator | 2025-06-05 19:34:26.521982 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:26.522951 | orchestrator | Thursday 05 June 2025 19:34:26 +0000 (0:00:00.179) 0:00:05.478 ********* 2025-06-05 19:34:26.709644 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:26.710488 | orchestrator | 2025-06-05 19:34:26.711228 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:26.712103 | orchestrator | Thursday 05 June 2025 19:34:26 +0000 (0:00:00.190) 0:00:05.668 ********* 2025-06-05 19:34:26.898913 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:26.899829 | orchestrator | 2025-06-05 19:34:26.900522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:26.901584 | orchestrator | Thursday 05 June 2025 
19:34:26 +0000 (0:00:00.188) 0:00:05.857 ********* 2025-06-05 19:34:27.113993 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:27.114219 | orchestrator | 2025-06-05 19:34:27.115826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:27.117226 | orchestrator | Thursday 05 June 2025 19:34:27 +0000 (0:00:00.214) 0:00:06.071 ********* 2025-06-05 19:34:27.300471 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:27.300677 | orchestrator | 2025-06-05 19:34:27.301639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:27.302783 | orchestrator | Thursday 05 June 2025 19:34:27 +0000 (0:00:00.187) 0:00:06.258 ********* 2025-06-05 19:34:27.487782 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:27.488009 | orchestrator | 2025-06-05 19:34:27.490146 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:27.490181 | orchestrator | Thursday 05 June 2025 19:34:27 +0000 (0:00:00.185) 0:00:06.444 ********* 2025-06-05 19:34:27.674758 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:27.675017 | orchestrator | 2025-06-05 19:34:27.676163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:27.676997 | orchestrator | Thursday 05 June 2025 19:34:27 +0000 (0:00:00.187) 0:00:06.632 ********* 2025-06-05 19:34:28.659298 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-05 19:34:28.659509 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-05 19:34:28.660553 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-05 19:34:28.661319 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-05 19:34:28.662065 | orchestrator | 2025-06-05 19:34:28.662503 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:28.663391 | 
orchestrator | Thursday 05 June 2025 19:34:28 +0000 (0:00:00.983) 0:00:07.615 ********* 2025-06-05 19:34:28.872257 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:28.872755 | orchestrator | 2025-06-05 19:34:28.873909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:28.874544 | orchestrator | Thursday 05 June 2025 19:34:28 +0000 (0:00:00.214) 0:00:07.830 ********* 2025-06-05 19:34:29.081543 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:29.081671 | orchestrator | 2025-06-05 19:34:29.081728 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:29.081983 | orchestrator | Thursday 05 June 2025 19:34:29 +0000 (0:00:00.209) 0:00:08.040 ********* 2025-06-05 19:34:29.279152 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:29.279957 | orchestrator | 2025-06-05 19:34:29.280770 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:34:29.283057 | orchestrator | Thursday 05 June 2025 19:34:29 +0000 (0:00:00.197) 0:00:08.237 ********* 2025-06-05 19:34:29.485584 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:29.485842 | orchestrator | 2025-06-05 19:34:29.488981 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-05 19:34:29.489661 | orchestrator | Thursday 05 June 2025 19:34:29 +0000 (0:00:00.204) 0:00:08.442 ********* 2025-06-05 19:34:29.601706 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:29.602859 | orchestrator | 2025-06-05 19:34:29.603573 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-05 19:34:29.604590 | orchestrator | Thursday 05 June 2025 19:34:29 +0000 (0:00:00.118) 0:00:08.560 ********* 2025-06-05 19:34:29.783854 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'f5969faa-081d-5d9e-9303-7a3301cb4b7a'}}) 2025-06-05 19:34:29.784393 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '46c2c746-0272-5326-baff-0a3e04c6e4bf'}}) 2025-06-05 19:34:29.785401 | orchestrator | 2025-06-05 19:34:29.786166 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-05 19:34:29.787102 | orchestrator | Thursday 05 June 2025 19:34:29 +0000 (0:00:00.181) 0:00:08.742 ********* 2025-06-05 19:34:31.762675 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'}) 2025-06-05 19:34:31.762789 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'}) 2025-06-05 19:34:31.762884 | orchestrator | 2025-06-05 19:34:31.762982 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-05 19:34:31.763496 | orchestrator | Thursday 05 June 2025 19:34:31 +0000 (0:00:01.975) 0:00:10.717 ********* 2025-06-05 19:34:31.917064 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:31.917857 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:31.919038 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:31.919965 | orchestrator | 2025-06-05 19:34:31.920547 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-05 19:34:31.921748 | orchestrator | Thursday 05 June 2025 19:34:31 +0000 (0:00:00.156) 0:00:10.874 ********* 2025-06-05 19:34:33.291307 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'}) 2025-06-05 19:34:33.291859 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'}) 2025-06-05 19:34:33.292916 | orchestrator | 2025-06-05 19:34:33.293364 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-05 19:34:33.294131 | orchestrator | Thursday 05 June 2025 19:34:33 +0000 (0:00:01.373) 0:00:12.247 ********* 2025-06-05 19:34:33.422388 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:33.423492 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:33.424800 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:33.425542 | orchestrator | 2025-06-05 19:34:33.426123 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-05 19:34:33.426520 | orchestrator | Thursday 05 June 2025 19:34:33 +0000 (0:00:00.133) 0:00:12.381 ********* 2025-06-05 19:34:33.546627 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:33.546954 | orchestrator | 2025-06-05 19:34:33.547514 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-05 19:34:33.548974 | orchestrator | Thursday 05 June 2025 19:34:33 +0000 (0:00:00.124) 0:00:12.506 ********* 2025-06-05 19:34:33.784187 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:33.784758 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:33.785691 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:33.786723 | orchestrator | 2025-06-05 19:34:33.787826 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-05 19:34:33.788781 | orchestrator | Thursday 05 June 2025 19:34:33 +0000 (0:00:00.237) 0:00:12.743 ********* 2025-06-05 19:34:33.909815 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:33.911004 | orchestrator | 2025-06-05 19:34:33.911774 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-05 19:34:33.914354 | orchestrator | Thursday 05 June 2025 19:34:33 +0000 (0:00:00.124) 0:00:12.868 ********* 2025-06-05 19:34:34.042994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:34.044104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:34.044784 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.045646 | orchestrator | 2025-06-05 19:34:34.046644 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-05 19:34:34.047229 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.133) 0:00:13.002 ********* 2025-06-05 19:34:34.166956 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.167800 | orchestrator | 2025-06-05 19:34:34.168477 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-05 19:34:34.169278 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.124) 0:00:13.126 ********* 2025-06-05 19:34:34.298863 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:34.300578 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:34.301323 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.302223 | orchestrator | 2025-06-05 19:34:34.302734 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-05 19:34:34.303346 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.131) 0:00:13.258 ********* 2025-06-05 19:34:34.413632 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:34:34.414249 | orchestrator | 2025-06-05 19:34:34.415107 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-05 19:34:34.416195 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.114) 0:00:13.372 ********* 2025-06-05 19:34:34.554117 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:34.554307 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:34.554805 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.555181 | orchestrator | 2025-06-05 19:34:34.555558 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-05 19:34:34.555894 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.141) 0:00:13.514 ********* 2025-06-05 19:34:34.701021 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  
2025-06-05 19:34:34.701816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:34.702445 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.703230 | orchestrator | 2025-06-05 19:34:34.703955 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-05 19:34:34.704755 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.145) 0:00:13.660 ********* 2025-06-05 19:34:34.833108 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})  2025-06-05 19:34:34.833643 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})  2025-06-05 19:34:34.834589 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.835864 | orchestrator | 2025-06-05 19:34:34.836283 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-05 19:34:34.836818 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.130) 0:00:13.791 ********* 2025-06-05 19:34:34.944567 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:34.945339 | orchestrator | 2025-06-05 19:34:34.945959 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-05 19:34:34.946235 | orchestrator | Thursday 05 June 2025 19:34:34 +0000 (0:00:00.112) 0:00:13.904 ********* 2025-06-05 19:34:35.065974 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:35.066190 | orchestrator | 2025-06-05 19:34:35.066212 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-05 19:34:35.066472 | orchestrator | Thursday 05 June 2025 19:34:35 +0000 (0:00:00.121) 
0:00:14.026 ********* 2025-06-05 19:34:35.194234 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:35.195775 | orchestrator | 2025-06-05 19:34:35.195818 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-05 19:34:35.196477 | orchestrator | Thursday 05 June 2025 19:34:35 +0000 (0:00:00.126) 0:00:14.153 ********* 2025-06-05 19:34:35.442839 | orchestrator | ok: [testbed-node-3] => { 2025-06-05 19:34:35.443145 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-05 19:34:35.444701 | orchestrator | } 2025-06-05 19:34:35.445463 | orchestrator | 2025-06-05 19:34:35.446141 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-05 19:34:35.446707 | orchestrator | Thursday 05 June 2025 19:34:35 +0000 (0:00:00.246) 0:00:14.399 ********* 2025-06-05 19:34:35.554690 | orchestrator | ok: [testbed-node-3] => { 2025-06-05 19:34:35.555756 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-05 19:34:35.556459 | orchestrator | } 2025-06-05 19:34:35.557146 | orchestrator | 2025-06-05 19:34:35.558223 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-05 19:34:35.559044 | orchestrator | Thursday 05 June 2025 19:34:35 +0000 (0:00:00.114) 0:00:14.514 ********* 2025-06-05 19:34:35.680977 | orchestrator | ok: [testbed-node-3] => { 2025-06-05 19:34:35.682953 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-05 19:34:35.683118 | orchestrator | } 2025-06-05 19:34:35.684234 | orchestrator | 2025-06-05 19:34:35.684849 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-05 19:34:35.685626 | orchestrator | Thursday 05 June 2025 19:34:35 +0000 (0:00:00.125) 0:00:14.639 ********* 2025-06-05 19:34:36.313388 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:34:36.313498 | orchestrator | 2025-06-05 19:34:36.313509 | orchestrator | TASK [Gather WAL VGs 
with total and available size in bytes] ******************* 2025-06-05 19:34:36.313554 | orchestrator | Thursday 05 June 2025 19:34:36 +0000 (0:00:00.631) 0:00:15.271 ********* 2025-06-05 19:34:36.817661 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:34:36.818441 | orchestrator | 2025-06-05 19:34:36.819506 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-05 19:34:36.820665 | orchestrator | Thursday 05 June 2025 19:34:36 +0000 (0:00:00.505) 0:00:15.776 ********* 2025-06-05 19:34:37.347229 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:34:37.348084 | orchestrator | 2025-06-05 19:34:37.349259 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-05 19:34:37.349927 | orchestrator | Thursday 05 June 2025 19:34:37 +0000 (0:00:00.528) 0:00:16.304 ********* 2025-06-05 19:34:37.479693 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:34:37.480422 | orchestrator | 2025-06-05 19:34:37.481233 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-05 19:34:37.481793 | orchestrator | Thursday 05 June 2025 19:34:37 +0000 (0:00:00.132) 0:00:16.436 ********* 2025-06-05 19:34:37.575785 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:37.576352 | orchestrator | 2025-06-05 19:34:37.578314 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-05 19:34:37.578724 | orchestrator | Thursday 05 June 2025 19:34:37 +0000 (0:00:00.098) 0:00:16.535 ********* 2025-06-05 19:34:37.686419 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:37.686769 | orchestrator | 2025-06-05 19:34:37.687323 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-05 19:34:37.688736 | orchestrator | Thursday 05 June 2025 19:34:37 +0000 (0:00:00.110) 0:00:16.645 ********* 2025-06-05 19:34:37.836914 | orchestrator | ok: 
[testbed-node-3] => { 2025-06-05 19:34:37.837124 | orchestrator |  "vgs_report": { 2025-06-05 19:34:37.837407 | orchestrator |  "vg": [] 2025-06-05 19:34:37.838393 | orchestrator |  } 2025-06-05 19:34:37.839117 | orchestrator | } 2025-06-05 19:34:37.839859 | orchestrator | 2025-06-05 19:34:37.841380 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-05 19:34:37.841676 | orchestrator | Thursday 05 June 2025 19:34:37 +0000 (0:00:00.150) 0:00:16.796 ********* 2025-06-05 19:34:37.972975 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:37.973082 | orchestrator | 2025-06-05 19:34:37.973117 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-05 19:34:37.974555 | orchestrator | Thursday 05 June 2025 19:34:37 +0000 (0:00:00.134) 0:00:16.931 ********* 2025-06-05 19:34:38.100484 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:38.101013 | orchestrator | 2025-06-05 19:34:38.101740 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-05 19:34:38.102138 | orchestrator | Thursday 05 June 2025 19:34:38 +0000 (0:00:00.128) 0:00:17.059 ********* 2025-06-05 19:34:38.419263 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:38.420279 | orchestrator | 2025-06-05 19:34:38.421834 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-05 19:34:38.423525 | orchestrator | Thursday 05 June 2025 19:34:38 +0000 (0:00:00.317) 0:00:17.376 ********* 2025-06-05 19:34:38.552870 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:38.554463 | orchestrator | 2025-06-05 19:34:38.555432 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-05 19:34:38.556822 | orchestrator | Thursday 05 June 2025 19:34:38 +0000 (0:00:00.133) 0:00:17.510 ********* 2025-06-05 19:34:38.688861 | orchestrator | skipping: 
[testbed-node-3] 2025-06-05 19:34:38.689707 | orchestrator | 2025-06-05 19:34:38.690134 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-05 19:34:38.691297 | orchestrator | Thursday 05 June 2025 19:34:38 +0000 (0:00:00.136) 0:00:17.647 ********* 2025-06-05 19:34:38.836176 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:38.837305 | orchestrator | 2025-06-05 19:34:38.838833 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-05 19:34:38.840487 | orchestrator | Thursday 05 June 2025 19:34:38 +0000 (0:00:00.146) 0:00:17.794 ********* 2025-06-05 19:34:38.979010 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:38.979098 | orchestrator | 2025-06-05 19:34:38.980306 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-05 19:34:38.980763 | orchestrator | Thursday 05 June 2025 19:34:38 +0000 (0:00:00.141) 0:00:17.935 ********* 2025-06-05 19:34:39.125184 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:39.125288 | orchestrator | 2025-06-05 19:34:39.126244 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-05 19:34:39.128015 | orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.146) 0:00:18.082 ********* 2025-06-05 19:34:39.251064 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:39.251890 | orchestrator | 2025-06-05 19:34:39.253276 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-05 19:34:39.254767 | orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.125) 0:00:18.207 ********* 2025-06-05 19:34:39.373280 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:34:39.374165 | orchestrator | 2025-06-05 19:34:39.375513 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-05 19:34:39.375719 | 
orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.124) 0:00:18.332 *********
2025-06-05 19:34:39.508994 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:39.510117 | orchestrator |
2025-06-05 19:34:39.511052 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-05 19:34:39.512030 | orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.134) 0:00:18.467 *********
2025-06-05 19:34:39.642332 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:39.642529 | orchestrator |
2025-06-05 19:34:39.642689 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-05 19:34:39.643129 | orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.134) 0:00:18.601 *********
2025-06-05 19:34:39.766289 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:39.766383 | orchestrator |
2025-06-05 19:34:39.766713 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-05 19:34:39.767119 | orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.123) 0:00:18.725 *********
2025-06-05 19:34:39.887262 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:39.888620 | orchestrator |
2025-06-05 19:34:39.889207 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-05 19:34:39.889999 | orchestrator | Thursday 05 June 2025 19:34:39 +0000 (0:00:00.120) 0:00:18.845 *********
2025-06-05 19:34:40.033049 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:40.033854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:40.034661 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:40.036069 | orchestrator |
2025-06-05 19:34:40.036814 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-05 19:34:40.037579 | orchestrator | Thursday 05 June 2025 19:34:40 +0000 (0:00:00.145) 0:00:18.990 *********
2025-06-05 19:34:40.374296 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:40.374504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:40.375353 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:40.376192 | orchestrator |
2025-06-05 19:34:40.376735 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-05 19:34:40.378278 | orchestrator | Thursday 05 June 2025 19:34:40 +0000 (0:00:00.342) 0:00:19.333 *********
2025-06-05 19:34:40.528896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:40.528995 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:40.529138 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:40.529616 | orchestrator |
2025-06-05 19:34:40.530117 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-05 19:34:40.530356 | orchestrator | Thursday 05 June 2025 19:34:40 +0000 (0:00:00.152) 0:00:19.485 *********
2025-06-05 19:34:40.679663 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:40.680215 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:40.681233 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:40.683177 | orchestrator |
2025-06-05 19:34:40.683267 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-05 19:34:40.684041 | orchestrator | Thursday 05 June 2025 19:34:40 +0000 (0:00:00.152) 0:00:19.638 *********
2025-06-05 19:34:40.834333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:40.835199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:40.836617 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:40.838069 | orchestrator |
2025-06-05 19:34:40.839315 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-05 19:34:40.840358 | orchestrator | Thursday 05 June 2025 19:34:40 +0000 (0:00:00.154) 0:00:19.792 *********
2025-06-05 19:34:40.992995 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:40.993100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:40.993700 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:40.995074 | orchestrator |
2025-06-05 19:34:40.996714 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-05 19:34:40.997766 | orchestrator | Thursday 05 June 2025 19:34:40 +0000 (0:00:00.156) 0:00:19.949 *********
2025-06-05 19:34:41.138100 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:41.139390 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:41.140432 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:41.141865 | orchestrator |
2025-06-05 19:34:41.142007 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-05 19:34:41.142848 | orchestrator | Thursday 05 June 2025 19:34:41 +0000 (0:00:00.147) 0:00:20.096 *********
2025-06-05 19:34:41.288936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:41.289120 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:41.290003 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:41.291407 | orchestrator |
2025-06-05 19:34:41.292675 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-05 19:34:41.292714 | orchestrator | Thursday 05 June 2025 19:34:41 +0000 (0:00:00.151) 0:00:20.247 *********
2025-06-05 19:34:41.798511 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:34:41.798700 | orchestrator |
2025-06-05 19:34:41.798787 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-05 19:34:41.798917 | orchestrator | Thursday 05 June 2025 19:34:41 +0000 (0:00:00.508) 0:00:20.756 *********
2025-06-05 19:34:42.281295 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:34:42.281984 | orchestrator |
2025-06-05 19:34:42.283025 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-05 19:34:42.283518 | orchestrator | Thursday 05 June 2025 19:34:42 +0000 (0:00:00.482) 0:00:21.238 *********
2025-06-05 19:34:42.424979 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:34:42.425761 | orchestrator |
2025-06-05 19:34:42.426165 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-05 19:34:42.427512 | orchestrator | Thursday 05 June 2025 19:34:42 +0000 (0:00:00.144) 0:00:21.383 *********
2025-06-05 19:34:42.587166 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'vg_name': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:42.589256 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'vg_name': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:42.589389 | orchestrator |
2025-06-05 19:34:42.590184 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-05 19:34:42.591327 | orchestrator | Thursday 05 June 2025 19:34:42 +0000 (0:00:00.162) 0:00:21.546 *********
2025-06-05 19:34:42.750172 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:42.750345 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:42.752574 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:42.752662 | orchestrator |
2025-06-05 19:34:42.752676 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-05 19:34:42.752690 | orchestrator | Thursday 05 June 2025 19:34:42 +0000 (0:00:00.163) 0:00:21.709 *********
2025-06-05 19:34:43.065694 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:43.067091 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:43.069159 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:43.070072 | orchestrator |
2025-06-05 19:34:43.071510 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-05 19:34:43.072331 | orchestrator | Thursday 05 June 2025 19:34:43 +0000 (0:00:00.313) 0:00:22.022 *********
2025-06-05 19:34:43.228227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:34:43.229534 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:34:43.230373 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:34:43.231791 | orchestrator |
2025-06-05 19:34:43.232664 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-05 19:34:43.233952 | orchestrator | Thursday 05 June 2025 19:34:43 +0000 (0:00:00.161) 0:00:22.184 *********
2025-06-05 19:34:43.483740 | orchestrator | ok: [testbed-node-3] => {
2025-06-05 19:34:43.484963 | orchestrator |  "lvm_report": {
2025-06-05 19:34:43.486235 | orchestrator |  "lv": [
2025-06-05 19:34:43.488727 | orchestrator |  {
2025-06-05 19:34:43.489554 | orchestrator |  "lv_name": "osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf",
2025-06-05 19:34:43.490159 | orchestrator |  "vg_name": "ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf"
2025-06-05 19:34:43.490687 | orchestrator |  },
2025-06-05 19:34:43.491354 | orchestrator |  {
2025-06-05 19:34:43.491946 | orchestrator |  "lv_name": "osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a",
2025-06-05 19:34:43.492351 | orchestrator |  "vg_name": "ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a"
2025-06-05 19:34:43.493204 | orchestrator |  }
2025-06-05 19:34:43.493572 | orchestrator |  ],
2025-06-05 19:34:43.494265 | orchestrator |  "pv": [
2025-06-05 19:34:43.494735 | orchestrator |  {
2025-06-05 19:34:43.495434 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-05 19:34:43.496179 | orchestrator |  "vg_name": "ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a"
2025-06-05 19:34:43.497004 | orchestrator |  },
2025-06-05 19:34:43.497263 | orchestrator |  {
2025-06-05 19:34:43.497670 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-05 19:34:43.498087 | orchestrator |  "vg_name": "ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf"
2025-06-05 19:34:43.498358 | orchestrator |  }
2025-06-05 19:34:43.499703 | orchestrator |  ]
2025-06-05 19:34:43.500768 | orchestrator |  }
2025-06-05 19:34:43.501510 | orchestrator | }
2025-06-05 19:34:43.502107 | orchestrator |
2025-06-05 19:34:43.503042 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-05 19:34:43.503194 | orchestrator |
2025-06-05 19:34:43.503636 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-05 19:34:43.503936 | orchestrator | Thursday 05 June 2025 19:34:43 +0000 (0:00:00.258) 0:00:22.442 *********
2025-06-05 19:34:43.720957 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-05 19:34:43.721623 | orchestrator |
2025-06-05 19:34:43.722322 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-05 19:34:43.722791 | orchestrator | Thursday 05 June 2025 19:34:43 +0000 (0:00:00.226) 0:00:22.679 *********
2025-06-05 19:34:43.947999 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:34:43.948476 | orchestrator |
2025-06-05 19:34:43.949314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:43.950106 | orchestrator | Thursday 05 June 2025 19:34:43 +0000 (0:00:00.226) 0:00:22.906 *********
2025-06-05 19:34:44.347573 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-06-05 19:34:44.349311 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-06-05 19:34:44.350867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-06-05 19:34:44.351656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-06-05 19:34:44.352556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-06-05 19:34:44.353440 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-06-05 19:34:44.354062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-06-05 19:34:44.354619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-06-05 19:34:44.355163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-06-05 19:34:44.355718 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-06-05 19:34:44.356208 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-06-05 19:34:44.356697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-06-05 19:34:44.357923 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-06-05 19:34:44.358102 | orchestrator |
2025-06-05 19:34:44.358990 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:44.359622 | orchestrator | Thursday 05 June 2025 19:34:44 +0000 (0:00:00.398) 0:00:23.305 *********
2025-06-05 19:34:44.557530 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:44.557795 | orchestrator |
2025-06-05 19:34:44.558435 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:44.559469 | orchestrator | Thursday 05 June 2025 19:34:44 +0000 (0:00:00.210) 0:00:23.515 *********
2025-06-05 19:34:44.754424 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:44.756619 | orchestrator |
2025-06-05 19:34:44.759993 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:44.761491 | orchestrator | Thursday 05 June 2025 19:34:44 +0000 (0:00:00.197) 0:00:23.713 *********
2025-06-05 19:34:44.943789 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:44.944885 | orchestrator |
2025-06-05 19:34:44.945573 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:44.946636 | orchestrator | Thursday 05 June 2025 19:34:44 +0000 (0:00:00.189) 0:00:23.902 *********
2025-06-05 19:34:45.535436 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:45.536016 | orchestrator |
2025-06-05 19:34:45.536937 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:45.537959 | orchestrator | Thursday 05 June 2025 19:34:45 +0000 (0:00:00.589) 0:00:24.492 *********
2025-06-05 19:34:45.732045 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:45.733009 | orchestrator |
2025-06-05 19:34:45.734126 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:45.735503 | orchestrator | Thursday 05 June 2025 19:34:45 +0000 (0:00:00.196) 0:00:24.689 *********
2025-06-05 19:34:45.918634 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:45.919372 | orchestrator |
2025-06-05 19:34:45.920202 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:45.921529 | orchestrator | Thursday 05 June 2025 19:34:45 +0000 (0:00:00.188) 0:00:24.877 *********
2025-06-05 19:34:46.123870 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:46.125317 | orchestrator |
2025-06-05 19:34:46.126325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:46.127505 | orchestrator | Thursday 05 June 2025 19:34:46 +0000 (0:00:00.204) 0:00:25.081 *********
2025-06-05 19:34:46.306722 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:46.307316 | orchestrator |
2025-06-05 19:34:46.309538 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:46.310770 | orchestrator | Thursday 05 June 2025 19:34:46 +0000 (0:00:00.183) 0:00:25.265 *********
2025-06-05 19:34:46.712159 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9)
2025-06-05 19:34:46.712833 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9)
2025-06-05 19:34:46.714358 | orchestrator |
2025-06-05 19:34:46.716552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:46.717683 | orchestrator | Thursday 05 June 2025 19:34:46 +0000 (0:00:00.405) 0:00:25.670 *********
2025-06-05 19:34:47.129133 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25)
2025-06-05 19:34:47.129811 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25)
2025-06-05 19:34:47.131720 | orchestrator |
2025-06-05 19:34:47.132549 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:47.132739 | orchestrator | Thursday 05 June 2025 19:34:47 +0000 (0:00:00.415) 0:00:26.086 *********
2025-06-05 19:34:47.619228 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f)
2025-06-05 19:34:47.619838 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f)
2025-06-05 19:34:47.620463 | orchestrator |
2025-06-05 19:34:47.621153 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:47.622002 | orchestrator | Thursday 05 June 2025 19:34:47 +0000 (0:00:00.491) 0:00:26.578 *********
2025-06-05 19:34:48.052089 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2)
2025-06-05 19:34:48.052352 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2)
2025-06-05 19:34:48.052447 | orchestrator |
2025-06-05 19:34:48.052976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-05 19:34:48.053291 | orchestrator | Thursday 05 June 2025 19:34:48 +0000 (0:00:00.433) 0:00:27.011 *********
2025-06-05 19:34:48.380420 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-05 19:34:48.380771 | orchestrator |
2025-06-05 19:34:48.383497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:48.384426 | orchestrator | Thursday 05 June 2025 19:34:48 +0000 (0:00:00.327) 0:00:27.339 *********
2025-06-05 19:34:48.974444 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-06-05 19:34:48.975299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-06-05 19:34:48.975974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-06-05 19:34:48.977153 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-06-05 19:34:48.978123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-06-05 19:34:48.978700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-06-05 19:34:48.979510 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-06-05 19:34:48.979970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-06-05 19:34:48.980472 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-06-05 19:34:48.980905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-06-05 19:34:48.981617 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-06-05 19:34:48.982458 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-06-05 19:34:48.983202 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-06-05 19:34:48.983808 | orchestrator |
2025-06-05 19:34:48.984755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:48.985062 | orchestrator | Thursday 05 June 2025 19:34:48 +0000 (0:00:00.594) 0:00:27.933 *********
2025-06-05 19:34:49.169707 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:49.170216 | orchestrator |
2025-06-05 19:34:49.170767 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:49.171754 | orchestrator | Thursday 05 June 2025 19:34:49 +0000 (0:00:00.194) 0:00:28.127 *********
2025-06-05 19:34:49.367947 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:49.368511 | orchestrator |
2025-06-05 19:34:49.368924 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:49.369892 | orchestrator | Thursday 05 June 2025 19:34:49 +0000 (0:00:00.199) 0:00:28.327 *********
2025-06-05 19:34:49.568088 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:49.568823 | orchestrator |
2025-06-05 19:34:49.569486 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:49.570228 | orchestrator | Thursday 05 June 2025 19:34:49 +0000 (0:00:00.200) 0:00:28.527 *********
2025-06-05 19:34:49.751441 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:49.751858 | orchestrator |
2025-06-05 19:34:49.753126 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:49.753934 | orchestrator | Thursday 05 June 2025 19:34:49 +0000 (0:00:00.183) 0:00:28.710 *********
2025-06-05 19:34:49.938521 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:49.939236 | orchestrator |
2025-06-05 19:34:49.940206 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:49.941145 | orchestrator | Thursday 05 June 2025 19:34:49 +0000 (0:00:00.186) 0:00:28.897 *********
2025-06-05 19:34:50.141699 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:50.141916 | orchestrator |
2025-06-05 19:34:50.141938 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:50.143712 | orchestrator | Thursday 05 June 2025 19:34:50 +0000 (0:00:00.201) 0:00:29.098 *********
2025-06-05 19:34:50.337830 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:50.338881 | orchestrator |
2025-06-05 19:34:50.339088 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:50.339981 | orchestrator | Thursday 05 June 2025 19:34:50 +0000 (0:00:00.198) 0:00:29.296 *********
2025-06-05 19:34:50.538327 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:50.538923 | orchestrator |
2025-06-05 19:34:50.540919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:50.541754 | orchestrator | Thursday 05 June 2025 19:34:50 +0000 (0:00:00.200) 0:00:29.497 *********
2025-06-05 19:34:51.338302 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-06-05 19:34:51.340219 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-06-05 19:34:51.340721 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-06-05 19:34:51.341732 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-06-05 19:34:51.343195 | orchestrator |
2025-06-05 19:34:51.343939 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:51.344903 | orchestrator | Thursday 05 June 2025 19:34:51 +0000 (0:00:00.797) 0:00:30.294 *********
2025-06-05 19:34:51.530556 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:51.531257 | orchestrator |
2025-06-05 19:34:51.532257 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:51.533642 | orchestrator | Thursday 05 June 2025 19:34:51 +0000 (0:00:00.194) 0:00:30.489 *********
2025-06-05 19:34:51.721866 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:51.722389 | orchestrator |
2025-06-05 19:34:51.723434 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:51.724252 | orchestrator | Thursday 05 June 2025 19:34:51 +0000 (0:00:00.191) 0:00:30.680 *********
2025-06-05 19:34:52.344065 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:52.344317 | orchestrator |
2025-06-05 19:34:52.346618 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-05 19:34:52.346876 | orchestrator | Thursday 05 June 2025 19:34:52 +0000 (0:00:00.621) 0:00:31.302 *********
2025-06-05 19:34:52.543700 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:52.544171 | orchestrator |
2025-06-05 19:34:52.545572 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-05 19:34:52.547226 | orchestrator | Thursday 05 June 2025 19:34:52 +0000 (0:00:00.199) 0:00:31.501 *********
2025-06-05 19:34:52.674490 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:52.675109 | orchestrator |
2025-06-05 19:34:52.675660 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-05 19:34:52.676470 | orchestrator | Thursday 05 June 2025 19:34:52 +0000 (0:00:00.130) 0:00:31.632 *********
2025-06-05 19:34:52.858676 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '9f7f7c2a-d649-5a85-84b6-7657bf908d98'}})
2025-06-05 19:34:52.859609 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '67c48ddb-095b-5044-89f7-89f2250f1a91'}})
2025-06-05 19:34:52.860391 | orchestrator |
2025-06-05 19:34:52.861302 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-05 19:34:52.862215 | orchestrator | Thursday 05 June 2025 19:34:52 +0000 (0:00:00.184) 0:00:31.817 *********
2025-06-05 19:34:54.712859 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:54.713768 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:54.714473 | orchestrator |
2025-06-05 19:34:54.715180 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-05 19:34:54.716683 | orchestrator | Thursday 05 June 2025 19:34:54 +0000 (0:00:01.853) 0:00:33.671 *********
2025-06-05 19:34:54.845193 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:54.845625 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:54.846357 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:54.847569 | orchestrator |
2025-06-05 19:34:54.848625 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-05 19:34:54.849156 | orchestrator | Thursday 05 June 2025 19:34:54 +0000 (0:00:00.133) 0:00:33.804 *********
2025-06-05 19:34:56.177151 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:56.178607 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:56.180344 | orchestrator |
2025-06-05 19:34:56.181240 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-05 19:34:56.182109 | orchestrator | Thursday 05 June 2025 19:34:56 +0000 (0:00:01.330) 0:00:35.135 *********
2025-06-05 19:34:56.311702 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:56.313108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:56.313750 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:56.314913 | orchestrator |
2025-06-05 19:34:56.315613 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-05 19:34:56.316122 | orchestrator | Thursday 05 June 2025 19:34:56 +0000 (0:00:00.135) 0:00:35.270 *********
2025-06-05 19:34:56.434208 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:56.434290 | orchestrator |
2025-06-05 19:34:56.434763 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-05 19:34:56.435186 | orchestrator | Thursday 05 June 2025 19:34:56 +0000 (0:00:00.120) 0:00:35.390 *********
2025-06-05 19:34:56.566785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:56.567263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:56.569183 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:56.570419 | orchestrator |
2025-06-05 19:34:56.571116 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-05 19:34:56.571910 | orchestrator | Thursday 05 June 2025 19:34:56 +0000 (0:00:00.135) 0:00:35.526 *********
2025-06-05 19:34:56.688618 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:56.689072 | orchestrator |
2025-06-05 19:34:56.689988 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-05 19:34:56.691303 | orchestrator | Thursday 05 June 2025 19:34:56 +0000 (0:00:00.122) 0:00:35.648 *********
2025-06-05 19:34:56.802954 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:56.804359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:56.805299 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:56.806254 | orchestrator |
2025-06-05 19:34:56.807432 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-05 19:34:56.807730 | orchestrator | Thursday 05 June 2025 19:34:56 +0000 (0:00:00.113) 0:00:35.761 *********
2025-06-05 19:34:57.031342 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.031426 | orchestrator |
2025-06-05 19:34:57.032015 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-05 19:34:57.032470 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.228) 0:00:35.990 *********
2025-06-05 19:34:57.169020 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:57.170056 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:57.171783 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.172250 | orchestrator |
2025-06-05 19:34:57.173054 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-05 19:34:57.173852 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.137) 0:00:36.128 *********
2025-06-05 19:34:57.277483 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:34:57.278167 | orchestrator |
2025-06-05 19:34:57.279272 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-05 19:34:57.279673 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.108) 0:00:36.237 *********
2025-06-05 19:34:57.418992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:57.419256 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:57.420103 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.420865 | orchestrator |
2025-06-05 19:34:57.421438 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-05 19:34:57.422131 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.140) 0:00:36.378 *********
2025-06-05 19:34:57.554734 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:57.555660 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:57.556088 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.556833 | orchestrator |
2025-06-05 19:34:57.557343 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-05 19:34:57.558161 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.134) 0:00:36.513 *********
2025-06-05 19:34:57.683739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:34:57.683836 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:34:57.684361 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.684693 | orchestrator |
2025-06-05 19:34:57.685268 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-05 19:34:57.685547 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.130) 0:00:36.644 *********
2025-06-05 19:34:57.806143 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.808145 | orchestrator |
2025-06-05 19:34:57.808173 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-05 19:34:57.810607 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.120) 0:00:36.765 *********
2025-06-05 19:34:57.925873 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:57.926735 | orchestrator |
2025-06-05 19:34:57.927501 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-05 19:34:57.930774 | orchestrator | Thursday 05 June 2025 19:34:57 +0000 (0:00:00.120) 0:00:36.886 *********
2025-06-05 19:34:58.047847 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:34:58.048197 | orchestrator |
2025-06-05 19:34:58.048985 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-05 19:34:58.049711 | orchestrator | Thursday 05 June 2025 19:34:58 +0000 (0:00:00.121) 0:00:37.007 *********
2025-06-05 19:34:58.165519 | orchestrator | ok: [testbed-node-4] => {
2025-06-05 19:34:58.166089 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-06-05 19:34:58.166968 | orchestrator | }
2025-06-05 19:34:58.167519 | orchestrator |
2025-06-05 19:34:58.168302 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-05 19:34:58.168721 | orchestrator | Thursday 05 June 2025 19:34:58 +0000 (0:00:00.116) 0:00:37.123 *********
2025-06-05 19:34:58.277982 | orchestrator | ok: [testbed-node-4] => {
2025-06-05 19:34:58.278233 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-05 19:34:58.278905 | orchestrator | }
2025-06-05 19:34:58.279376 | orchestrator |
2025-06-05 19:34:58.280017 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-05 19:34:58.280531 | orchestrator | Thursday 05 June 2025 19:34:58 +0000 (0:00:00.114) 0:00:37.237 *********
2025-06-05 19:34:58.389419 | orchestrator | ok: [testbed-node-4] => {
2025-06-05 19:34:58.390118 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-05 19:34:58.391042 | orchestrator | }
2025-06-05 19:34:58.391786 | orchestrator |
2025-06-05 19:34:58.392404 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-05 19:34:58.393141 | orchestrator | Thursday 05 June 2025 19:34:58 +0000 (0:00:00.111) 0:00:37.349 *********
2025-06-05 19:34:58.982937 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:34:58.983367 | orchestrator |
2025-06-05 19:34:58.984244 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-05 19:34:58.984995 | orchestrator | Thursday 05 June 2025 19:34:58 +0000 (0:00:00.591) 0:00:37.940 *********
2025-06-05 19:34:59.503768 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:34:59.503894 | orchestrator |
2025-06-05 19:34:59.504682 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-05 19:34:59.505321 | orchestrator | Thursday 05 June 2025 19:34:59 +0000 (0:00:00.522) 0:00:38.463 *********
2025-06-05 19:35:00.017938 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:35:00.018684 | orchestrator |
2025-06-05 19:35:00.019120 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-05 19:35:00.020002 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.512) 0:00:38.975 *********
2025-06-05
19:35:00.167192 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:35:00.167611 | orchestrator | 2025-06-05 19:35:00.168366 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-05 19:35:00.169274 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.150) 0:00:39.125 ********* 2025-06-05 19:35:00.288103 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:00.289050 | orchestrator | 2025-06-05 19:35:00.289508 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-05 19:35:00.290237 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.120) 0:00:39.245 ********* 2025-06-05 19:35:00.404313 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:00.404446 | orchestrator | 2025-06-05 19:35:00.404775 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-05 19:35:00.405813 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.116) 0:00:39.362 ********* 2025-06-05 19:35:00.547913 | orchestrator | ok: [testbed-node-4] => { 2025-06-05 19:35:00.548638 | orchestrator |  "vgs_report": { 2025-06-05 19:35:00.549637 | orchestrator |  "vg": [] 2025-06-05 19:35:00.550644 | orchestrator |  } 2025-06-05 19:35:00.551450 | orchestrator | } 2025-06-05 19:35:00.551992 | orchestrator | 2025-06-05 19:35:00.552853 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-05 19:35:00.553522 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.143) 0:00:39.506 ********* 2025-06-05 19:35:00.687649 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:00.688272 | orchestrator | 2025-06-05 19:35:00.689709 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-05 19:35:00.689736 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.137) 0:00:39.643 ********* 2025-06-05 
19:35:00.808772 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:00.809355 | orchestrator | 2025-06-05 19:35:00.810286 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-05 19:35:00.811267 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.123) 0:00:39.767 ********* 2025-06-05 19:35:00.934299 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:00.934498 | orchestrator | 2025-06-05 19:35:00.935312 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-05 19:35:00.936120 | orchestrator | Thursday 05 June 2025 19:35:00 +0000 (0:00:00.125) 0:00:39.893 ********* 2025-06-05 19:35:01.073064 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:01.073175 | orchestrator | 2025-06-05 19:35:01.073376 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-05 19:35:01.074127 | orchestrator | Thursday 05 June 2025 19:35:01 +0000 (0:00:00.138) 0:00:40.031 ********* 2025-06-05 19:35:01.195156 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:01.195772 | orchestrator | 2025-06-05 19:35:01.196965 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-05 19:35:01.197657 | orchestrator | Thursday 05 June 2025 19:35:01 +0000 (0:00:00.122) 0:00:40.154 ********* 2025-06-05 19:35:01.518988 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:01.519164 | orchestrator | 2025-06-05 19:35:01.521301 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-05 19:35:01.521967 | orchestrator | Thursday 05 June 2025 19:35:01 +0000 (0:00:00.321) 0:00:40.475 ********* 2025-06-05 19:35:01.650503 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:01.651541 | orchestrator | 2025-06-05 19:35:01.652322 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-06-05 19:35:01.653723 | orchestrator | Thursday 05 June 2025 19:35:01 +0000 (0:00:00.133) 0:00:40.609 ********* 2025-06-05 19:35:01.783771 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:01.784365 | orchestrator | 2025-06-05 19:35:01.785251 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-05 19:35:01.786771 | orchestrator | Thursday 05 June 2025 19:35:01 +0000 (0:00:00.132) 0:00:40.741 ********* 2025-06-05 19:35:01.921673 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:01.922629 | orchestrator | 2025-06-05 19:35:01.923273 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-05 19:35:01.924506 | orchestrator | Thursday 05 June 2025 19:35:01 +0000 (0:00:00.137) 0:00:40.879 ********* 2025-06-05 19:35:02.059094 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:02.060896 | orchestrator | 2025-06-05 19:35:02.060998 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-05 19:35:02.061224 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.138) 0:00:41.018 ********* 2025-06-05 19:35:02.194803 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:02.195201 | orchestrator | 2025-06-05 19:35:02.196020 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-05 19:35:02.196964 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.135) 0:00:41.154 ********* 2025-06-05 19:35:02.325125 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:02.325688 | orchestrator | 2025-06-05 19:35:02.326839 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-05 19:35:02.327969 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.130) 0:00:41.284 ********* 2025-06-05 19:35:02.469899 | orchestrator | skipping: [testbed-node-4] 
2025-06-05 19:35:02.471715 | orchestrator | 2025-06-05 19:35:02.472777 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-05 19:35:02.473518 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.143) 0:00:41.428 ********* 2025-06-05 19:35:02.617981 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:02.618676 | orchestrator | 2025-06-05 19:35:02.618972 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-05 19:35:02.620303 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.149) 0:00:41.577 ********* 2025-06-05 19:35:02.764712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:02.764895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:02.765798 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:02.766537 | orchestrator | 2025-06-05 19:35:02.767370 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-05 19:35:02.767944 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.145) 0:00:41.722 ********* 2025-06-05 19:35:02.919019 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:02.920292 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:02.921278 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:02.921976 | orchestrator | 2025-06-05 19:35:02.922664 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-06-05 19:35:02.923380 | orchestrator | Thursday 05 June 2025 19:35:02 +0000 (0:00:00.153) 0:00:41.876 ********* 2025-06-05 19:35:03.064561 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:03.064814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:03.065383 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:03.065942 | orchestrator | 2025-06-05 19:35:03.066531 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-05 19:35:03.067047 | orchestrator | Thursday 05 June 2025 19:35:03 +0000 (0:00:00.146) 0:00:42.023 ********* 2025-06-05 19:35:03.407308 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:03.407706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:03.409140 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:03.409719 | orchestrator | 2025-06-05 19:35:03.410204 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-05 19:35:03.410842 | orchestrator | Thursday 05 June 2025 19:35:03 +0000 (0:00:00.342) 0:00:42.366 ********* 2025-06-05 19:35:03.566917 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:03.567666 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 
'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:03.568526 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:03.570990 | orchestrator | 2025-06-05 19:35:03.571469 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-05 19:35:03.571725 | orchestrator | Thursday 05 June 2025 19:35:03 +0000 (0:00:00.159) 0:00:42.525 ********* 2025-06-05 19:35:03.717997 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:03.718557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:03.719325 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:03.720134 | orchestrator | 2025-06-05 19:35:03.721131 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-05 19:35:03.721325 | orchestrator | Thursday 05 June 2025 19:35:03 +0000 (0:00:00.150) 0:00:42.676 ********* 2025-06-05 19:35:03.879266 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:03.880816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:03.881942 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:03.883455 | orchestrator | 2025-06-05 19:35:03.884764 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-05 19:35:03.885705 | orchestrator | Thursday 05 June 2025 19:35:03 +0000 (0:00:00.161) 0:00:42.837 ********* 2025-06-05 19:35:04.032902 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:04.033103 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:04.034210 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:04.034964 | orchestrator | 2025-06-05 19:35:04.035524 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-05 19:35:04.036845 | orchestrator | Thursday 05 June 2025 19:35:04 +0000 (0:00:00.153) 0:00:42.991 ********* 2025-06-05 19:35:04.532252 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:35:04.533214 | orchestrator | 2025-06-05 19:35:04.534364 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-05 19:35:04.536203 | orchestrator | Thursday 05 June 2025 19:35:04 +0000 (0:00:00.499) 0:00:43.490 ********* 2025-06-05 19:35:05.029075 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:35:05.029302 | orchestrator | 2025-06-05 19:35:05.029616 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-05 19:35:05.030451 | orchestrator | Thursday 05 June 2025 19:35:05 +0000 (0:00:00.495) 0:00:43.986 ********* 2025-06-05 19:35:05.162468 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:35:05.162602 | orchestrator | 2025-06-05 19:35:05.163003 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-05 19:35:05.163606 | orchestrator | Thursday 05 June 2025 19:35:05 +0000 (0:00:00.135) 0:00:44.122 ********* 2025-06-05 19:35:05.334997 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'vg_name': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'}) 2025-06-05 19:35:05.335312 | orchestrator | ok: [testbed-node-4] => 
(item={'lv_name': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'vg_name': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'}) 2025-06-05 19:35:05.336047 | orchestrator | 2025-06-05 19:35:05.336797 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-05 19:35:05.337536 | orchestrator | Thursday 05 June 2025 19:35:05 +0000 (0:00:00.172) 0:00:44.294 ********* 2025-06-05 19:35:05.477074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:05.477351 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:05.478097 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:05.478937 | orchestrator | 2025-06-05 19:35:05.479413 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-05 19:35:05.481183 | orchestrator | Thursday 05 June 2025 19:35:05 +0000 (0:00:00.141) 0:00:44.436 ********* 2025-06-05 19:35:05.622338 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:05.622531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:05.622767 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:05.623621 | orchestrator | 2025-06-05 19:35:05.624025 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-05 19:35:05.624608 | orchestrator | Thursday 05 June 2025 19:35:05 +0000 (0:00:00.145) 0:00:44.581 ********* 2025-06-05 19:35:05.775529 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})  2025-06-05 19:35:05.775790 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})  2025-06-05 19:35:05.777782 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:35:05.779194 | orchestrator | 2025-06-05 19:35:05.779508 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-05 19:35:05.780273 | orchestrator | Thursday 05 June 2025 19:35:05 +0000 (0:00:00.152) 0:00:44.733 ********* 2025-06-05 19:35:06.287382 | orchestrator | ok: [testbed-node-4] => { 2025-06-05 19:35:06.287486 | orchestrator |  "lvm_report": { 2025-06-05 19:35:06.291619 | orchestrator |  "lv": [ 2025-06-05 19:35:06.292147 | orchestrator |  { 2025-06-05 19:35:06.292952 | orchestrator |  "lv_name": "osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91", 2025-06-05 19:35:06.293254 | orchestrator |  "vg_name": "ceph-67c48ddb-095b-5044-89f7-89f2250f1a91" 2025-06-05 19:35:06.293663 | orchestrator |  }, 2025-06-05 19:35:06.294469 | orchestrator |  { 2025-06-05 19:35:06.295085 | orchestrator |  "lv_name": "osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98", 2025-06-05 19:35:06.295311 | orchestrator |  "vg_name": "ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98" 2025-06-05 19:35:06.295981 | orchestrator |  } 2025-06-05 19:35:06.296341 | orchestrator |  ], 2025-06-05 19:35:06.296820 | orchestrator |  "pv": [ 2025-06-05 19:35:06.297427 | orchestrator |  { 2025-06-05 19:35:06.297764 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-05 19:35:06.298380 | orchestrator |  "vg_name": "ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98" 2025-06-05 19:35:06.298720 | orchestrator |  }, 2025-06-05 19:35:06.299147 | orchestrator |  { 2025-06-05 19:35:06.299809 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-05 19:35:06.300086 | orchestrator |  "vg_name": 
"ceph-67c48ddb-095b-5044-89f7-89f2250f1a91" 2025-06-05 19:35:06.300466 | orchestrator |  } 2025-06-05 19:35:06.300851 | orchestrator |  ] 2025-06-05 19:35:06.301481 | orchestrator |  } 2025-06-05 19:35:06.301752 | orchestrator | } 2025-06-05 19:35:06.302227 | orchestrator | 2025-06-05 19:35:06.302613 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-05 19:35:06.302976 | orchestrator | 2025-06-05 19:35:06.303491 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-05 19:35:06.303702 | orchestrator | Thursday 05 June 2025 19:35:06 +0000 (0:00:00.511) 0:00:45.244 ********* 2025-06-05 19:35:06.528288 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-05 19:35:06.528500 | orchestrator | 2025-06-05 19:35:06.529958 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-05 19:35:06.530805 | orchestrator | Thursday 05 June 2025 19:35:06 +0000 (0:00:00.242) 0:00:45.487 ********* 2025-06-05 19:35:06.751018 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:06.751535 | orchestrator | 2025-06-05 19:35:06.753443 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:06.754176 | orchestrator | Thursday 05 June 2025 19:35:06 +0000 (0:00:00.222) 0:00:45.709 ********* 2025-06-05 19:35:07.134153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-05 19:35:07.136790 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-05 19:35:07.137706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-05 19:35:07.138381 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-05 19:35:07.139418 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-05 19:35:07.140248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-05 19:35:07.140671 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-05 19:35:07.141403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-05 19:35:07.142672 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-05 19:35:07.143501 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-05 19:35:07.143946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-05 19:35:07.144240 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-05 19:35:07.144726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-05 19:35:07.145788 | orchestrator | 2025-06-05 19:35:07.146299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:07.147010 | orchestrator | Thursday 05 June 2025 19:35:07 +0000 (0:00:00.380) 0:00:46.090 ********* 2025-06-05 19:35:07.330007 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:07.330168 | orchestrator | 2025-06-05 19:35:07.331044 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:07.331474 | orchestrator | Thursday 05 June 2025 19:35:07 +0000 (0:00:00.198) 0:00:46.289 ********* 2025-06-05 19:35:07.526963 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:07.527099 | orchestrator | 2025-06-05 19:35:07.528699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:07.529447 | orchestrator | 
Thursday 05 June 2025 19:35:07 +0000 (0:00:00.195) 0:00:46.485 ********* 2025-06-05 19:35:07.726687 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:07.727281 | orchestrator | 2025-06-05 19:35:07.728526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:07.729762 | orchestrator | Thursday 05 June 2025 19:35:07 +0000 (0:00:00.200) 0:00:46.685 ********* 2025-06-05 19:35:07.922440 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:07.923082 | orchestrator | 2025-06-05 19:35:07.923877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:07.924717 | orchestrator | Thursday 05 June 2025 19:35:07 +0000 (0:00:00.195) 0:00:46.881 ********* 2025-06-05 19:35:08.115969 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:08.116297 | orchestrator | 2025-06-05 19:35:08.117999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:08.118084 | orchestrator | Thursday 05 June 2025 19:35:08 +0000 (0:00:00.191) 0:00:47.072 ********* 2025-06-05 19:35:08.699840 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:08.701391 | orchestrator | 2025-06-05 19:35:08.701644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:08.703080 | orchestrator | Thursday 05 June 2025 19:35:08 +0000 (0:00:00.582) 0:00:47.655 ********* 2025-06-05 19:35:08.893438 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:08.893537 | orchestrator | 2025-06-05 19:35:08.894368 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:08.895473 | orchestrator | Thursday 05 June 2025 19:35:08 +0000 (0:00:00.196) 0:00:47.851 ********* 2025-06-05 19:35:09.091234 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:09.092134 | orchestrator | 2025-06-05 19:35:09.093466 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:09.094544 | orchestrator | Thursday 05 June 2025 19:35:09 +0000 (0:00:00.197) 0:00:48.049 ********* 2025-06-05 19:35:09.495553 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42) 2025-06-05 19:35:09.496514 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42) 2025-06-05 19:35:09.497553 | orchestrator | 2025-06-05 19:35:09.499015 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:09.499167 | orchestrator | Thursday 05 June 2025 19:35:09 +0000 (0:00:00.404) 0:00:48.454 ********* 2025-06-05 19:35:09.894867 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8) 2025-06-05 19:35:09.895704 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8) 2025-06-05 19:35:09.896251 | orchestrator | 2025-06-05 19:35:09.897081 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:09.897977 | orchestrator | Thursday 05 June 2025 19:35:09 +0000 (0:00:00.399) 0:00:48.853 ********* 2025-06-05 19:35:10.310260 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e) 2025-06-05 19:35:10.310830 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e) 2025-06-05 19:35:10.311464 | orchestrator | 2025-06-05 19:35:10.313631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:10.314397 | orchestrator | Thursday 05 June 2025 19:35:10 +0000 (0:00:00.414) 0:00:49.268 ********* 2025-06-05 19:35:10.735807 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b) 2025-06-05 19:35:10.736118 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b) 2025-06-05 19:35:10.737093 | orchestrator | 2025-06-05 19:35:10.737730 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-05 19:35:10.738355 | orchestrator | Thursday 05 June 2025 19:35:10 +0000 (0:00:00.425) 0:00:49.693 ********* 2025-06-05 19:35:11.051063 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-05 19:35:11.051269 | orchestrator | 2025-06-05 19:35:11.051338 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:11.051812 | orchestrator | Thursday 05 June 2025 19:35:11 +0000 (0:00:00.315) 0:00:50.008 ********* 2025-06-05 19:35:11.446841 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-05 19:35:11.447376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-05 19:35:11.448593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-05 19:35:11.452829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-05 19:35:11.452856 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-05 19:35:11.452868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-05 19:35:11.452880 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-05 19:35:11.452892 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-05 19:35:11.453522 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-05 19:35:11.454364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-05 19:35:11.454844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-05 19:35:11.455315 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-05 19:35:11.455918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-05 19:35:11.456412 | orchestrator | 2025-06-05 19:35:11.457587 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:11.458283 | orchestrator | Thursday 05 June 2025 19:35:11 +0000 (0:00:00.396) 0:00:50.405 ********* 2025-06-05 19:35:11.636182 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:11.636308 | orchestrator | 2025-06-05 19:35:11.636399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:11.636709 | orchestrator | Thursday 05 June 2025 19:35:11 +0000 (0:00:00.189) 0:00:50.595 ********* 2025-06-05 19:35:11.836760 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:11.837899 | orchestrator | 2025-06-05 19:35:11.838682 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:11.839523 | orchestrator | Thursday 05 June 2025 19:35:11 +0000 (0:00:00.197) 0:00:50.792 ********* 2025-06-05 19:35:12.433003 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:12.433710 | orchestrator | 2025-06-05 19:35:12.434973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:12.436262 | orchestrator | Thursday 05 June 2025 19:35:12 +0000 (0:00:00.597) 0:00:51.389 ********* 2025-06-05 19:35:12.656411 | orchestrator | 
skipping: [testbed-node-5] 2025-06-05 19:35:12.656717 | orchestrator | 2025-06-05 19:35:12.657945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:12.659407 | orchestrator | Thursday 05 June 2025 19:35:12 +0000 (0:00:00.224) 0:00:51.614 ********* 2025-06-05 19:35:12.874116 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:12.874283 | orchestrator | 2025-06-05 19:35:12.874713 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:12.875425 | orchestrator | Thursday 05 June 2025 19:35:12 +0000 (0:00:00.219) 0:00:51.833 ********* 2025-06-05 19:35:13.058252 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:13.060682 | orchestrator | 2025-06-05 19:35:13.061765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:13.063159 | orchestrator | Thursday 05 June 2025 19:35:13 +0000 (0:00:00.183) 0:00:52.016 ********* 2025-06-05 19:35:13.255865 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:13.256014 | orchestrator | 2025-06-05 19:35:13.256032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:13.256114 | orchestrator | Thursday 05 June 2025 19:35:13 +0000 (0:00:00.198) 0:00:52.215 ********* 2025-06-05 19:35:13.453995 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:13.454146 | orchestrator | 2025-06-05 19:35:13.454163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:13.454176 | orchestrator | Thursday 05 June 2025 19:35:13 +0000 (0:00:00.194) 0:00:52.409 ********* 2025-06-05 19:35:14.085150 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-05 19:35:14.086818 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-05 19:35:14.088301 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-05 
19:35:14.088337 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-05 19:35:14.089159 | orchestrator | 2025-06-05 19:35:14.089971 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:14.090823 | orchestrator | Thursday 05 June 2025 19:35:14 +0000 (0:00:00.633) 0:00:53.042 ********* 2025-06-05 19:35:14.286853 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:14.287370 | orchestrator | 2025-06-05 19:35:14.288372 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:14.289424 | orchestrator | Thursday 05 June 2025 19:35:14 +0000 (0:00:00.202) 0:00:53.245 ********* 2025-06-05 19:35:14.480316 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:14.481129 | orchestrator | 2025-06-05 19:35:14.482118 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:14.483363 | orchestrator | Thursday 05 June 2025 19:35:14 +0000 (0:00:00.193) 0:00:53.439 ********* 2025-06-05 19:35:14.674423 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:14.675380 | orchestrator | 2025-06-05 19:35:14.677737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-05 19:35:14.678491 | orchestrator | Thursday 05 June 2025 19:35:14 +0000 (0:00:00.194) 0:00:53.633 ********* 2025-06-05 19:35:14.865189 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:14.865294 | orchestrator | 2025-06-05 19:35:14.866154 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-05 19:35:14.867047 | orchestrator | Thursday 05 June 2025 19:35:14 +0000 (0:00:00.191) 0:00:53.825 ********* 2025-06-05 19:35:15.180371 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:15.181366 | orchestrator | 2025-06-05 19:35:15.183187 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-06-05 19:35:15.183522 | orchestrator | Thursday 05 June 2025 19:35:15 +0000 (0:00:00.313) 0:00:54.138 ********* 2025-06-05 19:35:15.373764 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8d24cd11-dfc5-563c-af80-3beb61f8ef58'}}) 2025-06-05 19:35:15.373885 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'afd5871a-1fd2-5e8b-989c-517ad42902e5'}}) 2025-06-05 19:35:15.374074 | orchestrator | 2025-06-05 19:35:15.374799 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-05 19:35:15.375294 | orchestrator | Thursday 05 June 2025 19:35:15 +0000 (0:00:00.193) 0:00:54.331 ********* 2025-06-05 19:35:17.159633 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'}) 2025-06-05 19:35:17.159859 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'}) 2025-06-05 19:35:17.160951 | orchestrator | 2025-06-05 19:35:17.162201 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-05 19:35:17.163465 | orchestrator | Thursday 05 June 2025 19:35:17 +0000 (0:00:01.785) 0:00:56.116 ********* 2025-06-05 19:35:17.318874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:17.319509 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:17.319866 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:17.320780 | orchestrator | 2025-06-05 19:35:17.321429 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-06-05 19:35:17.322281 | orchestrator | Thursday 05 June 2025 19:35:17 +0000 (0:00:00.161) 0:00:56.277 ********* 2025-06-05 19:35:18.613986 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'}) 2025-06-05 19:35:18.614614 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'}) 2025-06-05 19:35:18.615462 | orchestrator | 2025-06-05 19:35:18.617498 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-05 19:35:18.617922 | orchestrator | Thursday 05 June 2025 19:35:18 +0000 (0:00:01.293) 0:00:57.571 ********* 2025-06-05 19:35:18.764730 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:18.765293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:18.766645 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:18.767617 | orchestrator | 2025-06-05 19:35:18.769032 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-05 19:35:18.769738 | orchestrator | Thursday 05 June 2025 19:35:18 +0000 (0:00:00.151) 0:00:57.723 ********* 2025-06-05 19:35:18.905347 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:18.905449 | orchestrator | 2025-06-05 19:35:18.906379 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-05 19:35:18.907310 | orchestrator | Thursday 05 June 2025 19:35:18 +0000 (0:00:00.140) 0:00:57.863 ********* 2025-06-05 19:35:19.051715 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:19.051810 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:19.053246 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:19.053273 | orchestrator | 2025-06-05 19:35:19.054107 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-05 19:35:19.055061 | orchestrator | Thursday 05 June 2025 19:35:19 +0000 (0:00:00.143) 0:00:58.007 ********* 2025-06-05 19:35:19.176833 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:19.177288 | orchestrator | 2025-06-05 19:35:19.178254 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-05 19:35:19.179741 | orchestrator | Thursday 05 June 2025 19:35:19 +0000 (0:00:00.127) 0:00:58.135 ********* 2025-06-05 19:35:19.322379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:19.322623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:19.323447 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:19.325334 | orchestrator | 2025-06-05 19:35:19.325743 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-05 19:35:19.326174 | orchestrator | Thursday 05 June 2025 19:35:19 +0000 (0:00:00.144) 0:00:58.279 ********* 2025-06-05 19:35:19.443641 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:19.443870 | orchestrator | 2025-06-05 19:35:19.444786 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-05 19:35:19.445332 | orchestrator | Thursday 05 June 2025 19:35:19 +0000 (0:00:00.123) 0:00:58.402 ********* 2025-06-05 19:35:19.614312 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:19.615948 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:19.617236 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:19.619402 | orchestrator | 2025-06-05 19:35:19.620684 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-05 19:35:19.621874 | orchestrator | Thursday 05 June 2025 19:35:19 +0000 (0:00:00.169) 0:00:58.572 ********* 2025-06-05 19:35:19.782171 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:19.782408 | orchestrator | 2025-06-05 19:35:19.783030 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-05 19:35:19.784126 | orchestrator | Thursday 05 June 2025 19:35:19 +0000 (0:00:00.168) 0:00:58.740 ********* 2025-06-05 19:35:20.151069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:20.151174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:20.151668 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:20.152361 | orchestrator | 2025-06-05 19:35:20.152988 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-05 19:35:20.153412 | orchestrator | Thursday 05 June 2025 
19:35:20 +0000 (0:00:00.369) 0:00:59.110 ********* 2025-06-05 19:35:20.318103 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:20.319683 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:20.319823 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:20.320465 | orchestrator | 2025-06-05 19:35:20.321114 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-05 19:35:20.321537 | orchestrator | Thursday 05 June 2025 19:35:20 +0000 (0:00:00.164) 0:00:59.274 ********* 2025-06-05 19:35:20.468165 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:20.468835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:20.469270 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:20.469976 | orchestrator | 2025-06-05 19:35:20.470892 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-05 19:35:20.471661 | orchestrator | Thursday 05 June 2025 19:35:20 +0000 (0:00:00.152) 0:00:59.427 ********* 2025-06-05 19:35:20.598609 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:20.598817 | orchestrator | 2025-06-05 19:35:20.599537 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-05 19:35:20.600336 | orchestrator | Thursday 05 June 2025 19:35:20 +0000 (0:00:00.130) 0:00:59.557 ********* 2025-06-05 19:35:20.740264 | orchestrator | skipping: [testbed-node-5] 2025-06-05 
19:35:20.741166 | orchestrator | 2025-06-05 19:35:20.741935 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-05 19:35:20.742767 | orchestrator | Thursday 05 June 2025 19:35:20 +0000 (0:00:00.141) 0:00:59.699 ********* 2025-06-05 19:35:20.874538 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:20.875940 | orchestrator | 2025-06-05 19:35:20.876264 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-05 19:35:20.877175 | orchestrator | Thursday 05 June 2025 19:35:20 +0000 (0:00:00.133) 0:00:59.833 ********* 2025-06-05 19:35:21.006310 | orchestrator | ok: [testbed-node-5] => { 2025-06-05 19:35:21.006718 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-05 19:35:21.006750 | orchestrator | } 2025-06-05 19:35:21.007598 | orchestrator | 2025-06-05 19:35:21.007636 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-05 19:35:21.008006 | orchestrator | Thursday 05 June 2025 19:35:21 +0000 (0:00:00.131) 0:00:59.964 ********* 2025-06-05 19:35:21.150310 | orchestrator | ok: [testbed-node-5] => { 2025-06-05 19:35:21.150486 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-05 19:35:21.151339 | orchestrator | } 2025-06-05 19:35:21.151423 | orchestrator | 2025-06-05 19:35:21.152371 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-05 19:35:21.152462 | orchestrator | Thursday 05 June 2025 19:35:21 +0000 (0:00:00.144) 0:01:00.109 ********* 2025-06-05 19:35:21.294491 | orchestrator | ok: [testbed-node-5] => { 2025-06-05 19:35:21.294657 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-05 19:35:21.294675 | orchestrator | } 2025-06-05 19:35:21.294688 | orchestrator | 2025-06-05 19:35:21.294700 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-05 19:35:21.295306 | 
orchestrator | Thursday 05 June 2025 19:35:21 +0000 (0:00:00.142) 0:01:00.252 ********* 2025-06-05 19:35:21.791459 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:21.791815 | orchestrator | 2025-06-05 19:35:21.792445 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-05 19:35:21.793377 | orchestrator | Thursday 05 June 2025 19:35:21 +0000 (0:00:00.498) 0:01:00.750 ********* 2025-06-05 19:35:22.292485 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:22.293216 | orchestrator | 2025-06-05 19:35:22.295341 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-05 19:35:22.296116 | orchestrator | Thursday 05 June 2025 19:35:22 +0000 (0:00:00.499) 0:01:01.249 ********* 2025-06-05 19:35:22.801851 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:22.802244 | orchestrator | 2025-06-05 19:35:22.802970 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-05 19:35:22.803398 | orchestrator | Thursday 05 June 2025 19:35:22 +0000 (0:00:00.511) 0:01:01.761 ********* 2025-06-05 19:35:23.143502 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:23.143657 | orchestrator | 2025-06-05 19:35:23.144325 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-05 19:35:23.145104 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.341) 0:01:02.102 ********* 2025-06-05 19:35:23.251057 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:23.251392 | orchestrator | 2025-06-05 19:35:23.252086 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-05 19:35:23.253403 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.108) 0:01:02.210 ********* 2025-06-05 19:35:23.360077 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:23.360857 | orchestrator | 2025-06-05 19:35:23.361417 | 
orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-05 19:35:23.363024 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.107) 0:01:02.318 *********
2025-06-05 19:35:23.493150 | orchestrator | ok: [testbed-node-5] => {
2025-06-05 19:35:23.493312 | orchestrator |  "vgs_report": {
2025-06-05 19:35:23.494400 | orchestrator |  "vg": []
2025-06-05 19:35:23.495271 | orchestrator |  }
2025-06-05 19:35:23.496636 | orchestrator | }
2025-06-05 19:35:23.498646 | orchestrator |
2025-06-05 19:35:23.498778 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-05 19:35:23.499340 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.134) 0:01:02.452 *********
2025-06-05 19:35:23.622948 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:23.623625 | orchestrator |
2025-06-05 19:35:23.624667 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-05 19:35:23.626689 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.129) 0:01:02.582 *********
2025-06-05 19:35:23.752272 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:23.752801 | orchestrator |
2025-06-05 19:35:23.753962 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-05 19:35:23.754723 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.129) 0:01:02.711 *********
2025-06-05 19:35:23.880675 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:23.881949 | orchestrator |
2025-06-05 19:35:23.884233 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-05 19:35:23.884694 | orchestrator | Thursday 05 June 2025 19:35:23 +0000 (0:00:00.127) 0:01:02.838 *********
2025-06-05 19:35:24.017030 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:24.017247 | orchestrator |
2025-06-05 19:35:24.019449 |
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-05 19:35:24.020633 | orchestrator | Thursday 05 June 2025 19:35:24 +0000 (0:00:00.136) 0:01:02.975 ********* 2025-06-05 19:35:24.159747 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:24.159954 | orchestrator | 2025-06-05 19:35:24.161134 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-05 19:35:24.162354 | orchestrator | Thursday 05 June 2025 19:35:24 +0000 (0:00:00.144) 0:01:03.119 ********* 2025-06-05 19:35:24.337487 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:24.337825 | orchestrator | 2025-06-05 19:35:24.339969 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-05 19:35:24.341095 | orchestrator | Thursday 05 June 2025 19:35:24 +0000 (0:00:00.175) 0:01:03.294 ********* 2025-06-05 19:35:24.490387 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:24.491692 | orchestrator | 2025-06-05 19:35:24.492473 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-05 19:35:24.493198 | orchestrator | Thursday 05 June 2025 19:35:24 +0000 (0:00:00.154) 0:01:03.449 ********* 2025-06-05 19:35:24.629823 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:24.630211 | orchestrator | 2025-06-05 19:35:24.631513 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-05 19:35:24.633279 | orchestrator | Thursday 05 June 2025 19:35:24 +0000 (0:00:00.139) 0:01:03.588 ********* 2025-06-05 19:35:24.967891 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:24.970291 | orchestrator | 2025-06-05 19:35:24.970359 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-05 19:35:24.971055 | orchestrator | Thursday 05 June 2025 19:35:24 +0000 (0:00:00.337) 0:01:03.925 ********* 
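[Editor's note] For reference, the `ceph-8d24cd11-…`/`osd-block-8d24cd11-…` names appearing in the "Create block VGs" and "Create block LVs" tasks above are derived from the `osd_lvm_uuid` of each entry in `ceph_osd_devices` (visible in the "Create dict of block VGs -> PVs from ceph_osd_devices" task). A minimal Python sketch of that naming scheme; the helper `lvm_volumes()` is hypothetical and not part of the OSISM playbooks:

```python
# Hypothetical sketch (not the playbook's actual implementation):
# derive the lvm_volumes entries used by "Create block VGs"/"Create block
# LVs" from ceph_osd_devices. UUIDs are the ones shown in the log above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8d24cd11-dfc5-563c-af80-3beb61f8ef58"},
    "sdc": {"osd_lvm_uuid": "afd5871a-1fd2-5e8b-989c-517ad42902e5"},
}

def lvm_volumes(devices):
    """Each OSD device yields one VG 'ceph-<uuid>' holding one LV 'osd-block-<uuid>'."""
    return [
        {
            "data": f"osd-block-{info['osd_lvm_uuid']}",
            "data_vg": f"ceph-{info['osd_lvm_uuid']}",
        }
        for info in devices.values()
    ]

volumes = lvm_volumes(ceph_osd_devices)
```

This matches the loop items logged for both tasks, e.g. `{'data': 'osd-block-8d24cd11-…', 'data_vg': 'ceph-8d24cd11-…'}`.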
2025-06-05 19:35:25.104971 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.106180 | orchestrator | 2025-06-05 19:35:25.106694 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-05 19:35:25.107872 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.138) 0:01:04.064 ********* 2025-06-05 19:35:25.236868 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.237080 | orchestrator | 2025-06-05 19:35:25.238138 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-05 19:35:25.238836 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.131) 0:01:04.195 ********* 2025-06-05 19:35:25.364178 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.365395 | orchestrator | 2025-06-05 19:35:25.365786 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-05 19:35:25.368016 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.126) 0:01:04.321 ********* 2025-06-05 19:35:25.499039 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.499796 | orchestrator | 2025-06-05 19:35:25.500643 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-05 19:35:25.501870 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.136) 0:01:04.458 ********* 2025-06-05 19:35:25.639437 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.640374 | orchestrator | 2025-06-05 19:35:25.641513 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-05 19:35:25.642965 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.139) 0:01:04.598 ********* 2025-06-05 19:35:25.785334 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 
19:35:25.786583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:25.787369 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.788105 | orchestrator | 2025-06-05 19:35:25.789015 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-05 19:35:25.789724 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.145) 0:01:04.744 ********* 2025-06-05 19:35:25.923260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:25.924713 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:25.925304 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:25.926173 | orchestrator | 2025-06-05 19:35:25.927211 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-05 19:35:25.928227 | orchestrator | Thursday 05 June 2025 19:35:25 +0000 (0:00:00.137) 0:01:04.881 ********* 2025-06-05 19:35:26.058636 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:26.059001 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:26.059532 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:26.060348 | orchestrator | 2025-06-05 19:35:26.062288 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-05 19:35:26.062310 | orchestrator | Thursday 05 June 2025 
19:35:26 +0000 (0:00:00.135) 0:01:05.017 ********* 2025-06-05 19:35:26.208249 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:26.208805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:26.209983 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:26.210771 | orchestrator | 2025-06-05 19:35:26.211853 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-05 19:35:26.212697 | orchestrator | Thursday 05 June 2025 19:35:26 +0000 (0:00:00.149) 0:01:05.166 ********* 2025-06-05 19:35:26.358854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:26.359885 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:26.361539 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:26.362396 | orchestrator | 2025-06-05 19:35:26.363661 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-05 19:35:26.363834 | orchestrator | Thursday 05 June 2025 19:35:26 +0000 (0:00:00.150) 0:01:05.317 ********* 2025-06-05 19:35:26.502900 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:26.502989 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:26.503003 | orchestrator | 
skipping: [testbed-node-5] 2025-06-05 19:35:26.503481 | orchestrator | 2025-06-05 19:35:26.503954 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-05 19:35:26.504481 | orchestrator | Thursday 05 June 2025 19:35:26 +0000 (0:00:00.144) 0:01:05.462 ********* 2025-06-05 19:35:26.854681 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:26.855113 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:26.855612 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:26.856426 | orchestrator | 2025-06-05 19:35:26.857129 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-05 19:35:26.857801 | orchestrator | Thursday 05 June 2025 19:35:26 +0000 (0:00:00.352) 0:01:05.814 ********* 2025-06-05 19:35:27.009074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:27.009171 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:27.009435 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:35:27.010431 | orchestrator | 2025-06-05 19:35:27.010888 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-05 19:35:27.012409 | orchestrator | Thursday 05 June 2025 19:35:27 +0000 (0:00:00.151) 0:01:05.966 ********* 2025-06-05 19:35:27.514841 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:27.515005 | orchestrator | 2025-06-05 19:35:27.515631 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-06-05 19:35:27.516644 | orchestrator | Thursday 05 June 2025 19:35:27 +0000 (0:00:00.505) 0:01:06.471 ********* 2025-06-05 19:35:28.027429 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:28.027772 | orchestrator | 2025-06-05 19:35:28.028378 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-05 19:35:28.029272 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.513) 0:01:06.985 ********* 2025-06-05 19:35:28.172672 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:35:28.173062 | orchestrator | 2025-06-05 19:35:28.173638 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-05 19:35:28.174383 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.146) 0:01:07.132 ********* 2025-06-05 19:35:28.339956 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'vg_name': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'}) 2025-06-05 19:35:28.340342 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'vg_name': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'}) 2025-06-05 19:35:28.340823 | orchestrator | 2025-06-05 19:35:28.341601 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-05 19:35:28.341951 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.167) 0:01:07.300 ********* 2025-06-05 19:35:28.492708 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})  2025-06-05 19:35:28.492819 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})  2025-06-05 19:35:28.493762 | orchestrator | skipping: 
[testbed-node-5]
2025-06-05 19:35:28.494829 | orchestrator |
2025-06-05 19:35:28.496268 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-05 19:35:28.497107 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.150) 0:01:07.450 *********
2025-06-05 19:35:28.636369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})
2025-06-05 19:35:28.636615 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})
2025-06-05 19:35:28.637220 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:28.637757 | orchestrator |
2025-06-05 19:35:28.638980 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-05 19:35:28.639723 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.143) 0:01:07.593 *********
2025-06-05 19:35:28.769946 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})
2025-06-05 19:35:28.770530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})
2025-06-05 19:35:28.771345 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:28.772148 | orchestrator |
2025-06-05 19:35:28.772821 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-05 19:35:28.773329 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.135) 0:01:07.729 *********
2025-06-05 19:35:28.909756 | orchestrator | ok: [testbed-node-5] => {
2025-06-05 19:35:28.909850 | orchestrator |     "lvm_report": {
2025-06-05 19:35:28.910588 | orchestrator |         "lv": [
2025-06-05 19:35:28.911798 | orchestrator |             {
2025-06-05 19:35:28.912693 | orchestrator |                 "lv_name": "osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58",
2025-06-05 19:35:28.912833 | orchestrator |                 "vg_name": "ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58"
2025-06-05 19:35:28.913216 | orchestrator |             },
2025-06-05 19:35:28.914419 | orchestrator |             {
2025-06-05 19:35:28.914674 | orchestrator |                 "lv_name": "osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5",
2025-06-05 19:35:28.914703 | orchestrator |                 "vg_name": "ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5"
2025-06-05 19:35:28.914779 | orchestrator |             }
2025-06-05 19:35:28.915417 | orchestrator |         ],
2025-06-05 19:35:28.916881 | orchestrator |         "pv": [
2025-06-05 19:35:28.917329 | orchestrator |             {
2025-06-05 19:35:28.918155 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-05 19:35:28.918991 | orchestrator |                 "vg_name": "ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58"
2025-06-05 19:35:28.919740 | orchestrator |             },
2025-06-05 19:35:28.920628 | orchestrator |             {
2025-06-05 19:35:28.921169 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-05 19:35:28.922170 | orchestrator |                 "vg_name": "ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5"
2025-06-05 19:35:28.922502 | orchestrator |             }
2025-06-05 19:35:28.923420 | orchestrator |         ]
2025-06-05 19:35:28.923876 | orchestrator |     }
2025-06-05 19:35:28.924670 | orchestrator | }
2025-06-05 19:35:28.925425 | orchestrator |
2025-06-05 19:35:28.925915 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:35:28.926688 | orchestrator | 2025-06-05 19:35:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 19:35:28.926804 | orchestrator | 2025-06-05 19:35:28 | INFO  | Please wait and do not abort execution.
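The "Fail if DB/WAL LV defined in lvm_volumes is missing" tasks above cross-check each `lvm_volumes` entry against the LVM report that gets printed afterwards. A minimal Python sketch of that kind of check, using the report data from this log (this is an illustration only, not the actual playbook code; `missing_lvs` is a hypothetical helper):

```python
# Sketch: verify every LV/VG pair from lvm_volumes exists in an LVM report
# shaped like the "Print LVM report data" output above.

lvm_report = {
    "lv": [
        {"lv_name": "osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58",
         "vg_name": "ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58"},
        {"lv_name": "osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5",
         "vg_name": "ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5"},
    ],
}

lvm_volumes = [
    {"data": "osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58",
     "data_vg": "ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58"},
]

def missing_lvs(volumes, report):
    """Return the volume entries whose (lv_name, vg_name) pair is absent."""
    existing = {(lv["lv_name"], lv["vg_name"]) for lv in report["lv"]}
    return [v for v in volumes if (v["data"], v["data_vg"]) not in existing]

# Both LVs exist, so the check passes and the task is skipped (as in the log).
assert missing_lvs(lvm_volumes, lvm_report) == []
```

A non-empty result would correspond to the task failing the play for that host.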
2025-06-05 19:35:28.927373 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-05 19:35:28.927980 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-05 19:35:28.928393 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-05 19:35:28.931744 | orchestrator |
2025-06-05 19:35:28.932494 | orchestrator |
2025-06-05 19:35:28.932791 | orchestrator |
2025-06-05 19:35:28.933497 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:35:28.933820 | orchestrator | Thursday 05 June 2025 19:35:28 +0000 (0:00:00.139) 0:01:07.869 *********
2025-06-05 19:35:28.934268 | orchestrator | ===============================================================================
2025-06-05 19:35:28.934982 | orchestrator | Create block VGs -------------------------------------------------------- 5.61s
2025-06-05 19:35:28.935229 | orchestrator | Create block LVs -------------------------------------------------------- 4.00s
2025-06-05 19:35:28.935599 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.72s
2025-06-05 19:35:28.935971 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s
2025-06-05 19:35:28.936285 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.53s
2025-06-05 19:35:28.936711 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.51s
2025-06-05 19:35:28.937165 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.49s
2025-06-05 19:35:28.937468 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s
2025-06-05 19:35:28.938481 | orchestrator | Add known links to the list of available block devices ------------------ 1.13s
2025-06-05 19:35:28.938911 | orchestrator | Add known partitions to the list of available block devices ------------- 0.98s
2025-06-05 19:35:28.939250 | orchestrator | Print LVM report data --------------------------------------------------- 0.91s
2025-06-05 19:35:28.939596 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s
2025-06-05 19:35:28.940053 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.69s
2025-06-05 19:35:28.940356 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.66s
2025-06-05 19:35:28.940809 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.65s
2025-06-05 19:35:28.941058 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.65s
2025-06-05 19:35:28.941460 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.64s
2025-06-05 19:35:28.941740 | orchestrator | Get initial list of available block devices ----------------------------- 0.64s
2025-06-05 19:35:28.942399 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.63s
2025-06-05 19:35:28.943150 | orchestrator | Add known partitions to the list of available block devices ------------- 0.63s
2025-06-05 19:35:31.168244 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:35:31.168342 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:35:31.168356 | orchestrator | Registering Redlock._release_script
2025-06-05 19:35:31.225134 | orchestrator | 2025-06-05 19:35:31 | INFO  | Task 9dd645e2-fe3e-4897-994c-4b78cf76fbab (facts) was prepared for execution.
2025-06-05 19:35:31.225226 | orchestrator | 2025-06-05 19:35:31 | INFO  | It takes a moment until task 9dd645e2-fe3e-4897-994c-4b78cf76fbab (facts) has been started and output is visible here.
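The "Registering Redlock._release_script" messages above come from a Redis-based distributed lock: release (and extend) only succeed if the caller still holds the lock token. A minimal sketch of that token-checked semantics, with a plain dict standing in for Redis (illustrative only; the real Redlock uses atomic server-side Lua scripts and expiry times, which this sketch omits):

```python
# Sketch of token-checked lock release, the behavior behind a Redlock-style
# "_release_script": delete the lock only if the stored token matches.
import uuid

store = {}  # key -> token; stand-in for a Redis database

def acquire(key):
    """Take the lock only if it is free; return the holder's token."""
    if key in store:
        return None  # already held by someone else
    token = uuid.uuid4().hex
    store[key] = token
    return token

def release(key, token):
    """Delete the lock only if the caller still holds it."""
    if store.get(key) == token:
        del store[key]
        return True
    return False  # held by another client, or already gone

token = acquire("ansible-playbook")
assert acquire("ansible-playbook") is None        # second acquire is refused
assert release("ansible-playbook", "wrong") is False  # wrong token: no-op
assert release("ansible-playbook", token) is True
```

The token check is what prevents a client whose lock already expired from releasing a lock now held by someone else.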
2025-06-05 19:35:35.203083 | orchestrator |
2025-06-05 19:35:35.204116 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-05 19:35:35.206659 | orchestrator |
2025-06-05 19:35:35.206712 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-05 19:35:35.206726 | orchestrator | Thursday 05 June 2025 19:35:35 +0000 (0:00:00.262) 0:00:00.262 *********
2025-06-05 19:35:36.101024 | orchestrator | ok: [testbed-manager]
2025-06-05 19:35:36.101818 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:35:36.102637 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:35:36.103417 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:35:36.104297 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:35:36.105217 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:35:36.105935 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:35:36.106939 | orchestrator |
2025-06-05 19:35:36.107490 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-05 19:35:36.108351 | orchestrator | Thursday 05 June 2025 19:35:36 +0000 (0:00:00.897) 0:00:01.159 *********
2025-06-05 19:35:36.243261 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:35:36.315591 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:35:36.386285 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:35:36.454146 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:35:36.520883 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:35:37.157973 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:35:37.158957 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:37.159696 | orchestrator |
2025-06-05 19:35:37.160434 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-05 19:35:37.161003 | orchestrator |
2025-06-05 19:35:37.162171 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-05 19:35:37.162625 | orchestrator | Thursday 05 June 2025 19:35:37 +0000 (0:00:01.061) 0:00:02.220 *********
2025-06-05 19:35:41.874927 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:35:41.875130 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:35:41.876293 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:35:41.879683 | orchestrator | ok: [testbed-manager]
2025-06-05 19:35:41.879724 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:35:41.879737 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:35:41.879748 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:35:41.880429 | orchestrator |
2025-06-05 19:35:41.880741 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-05 19:35:41.883043 | orchestrator |
2025-06-05 19:35:41.883300 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-05 19:35:41.884166 | orchestrator | Thursday 05 June 2025 19:35:41 +0000 (0:00:04.716) 0:00:06.936 *********
2025-06-05 19:35:42.039990 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:35:42.132870 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:35:42.208683 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:35:42.283306 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:35:42.356820 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:35:42.393052 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:35:42.393149 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:35:42.393639 | orchestrator |
2025-06-05 19:35:42.394422 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:35:42.394655 | orchestrator | 2025-06-05 19:35:42 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
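The per-host PLAY RECAP lines that follow each play in this log have a fixed `key=value` layout that is easy to post-process. A small sketch (not part of OSISM or Zuul; the `parse_recap` helper is hypothetical) that turns one recap line into per-host counters:

```python
# Sketch: parse Ansible PLAY RECAP lines like the ones in this log
# into a (host, counters) pair.
import re

RECAP = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+"
    r"skipped=(?P<skipped>\d+)"
)

def parse_recap(line):
    """Return (host, {counter: int}) for a recap line, or None otherwise."""
    m = RECAP.match(line.strip())
    if not m:
        return None
    counts = {k: int(v) for k, v in m.groupdict().items() if k != "host"}
    return m.group("host"), counts

host, counts = parse_recap(
    "testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0"
)
assert host == "testbed-manager"
assert counts["failed"] == 0 and counts["skipped"] == 2
```

Aggregating `failed` and `unreachable` across hosts is a quick way to decide whether a run like this one succeeded.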
2025-06-05 19:35:42.394679 | orchestrator | 2025-06-05 19:35:42 | INFO  | Please wait and do not abort execution.
2025-06-05 19:35:42.395273 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.396142 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.396830 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.397165 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.398465 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.399573 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.400440 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:35:42.401078 | orchestrator |
2025-06-05 19:35:42.401635 | orchestrator |
2025-06-05 19:35:42.402294 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:35:42.402813 | orchestrator | Thursday 05 June 2025 19:35:42 +0000 (0:00:00.516) 0:00:07.453 *********
2025-06-05 19:35:42.403409 | orchestrator | ===============================================================================
2025-06-05 19:35:42.404041 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.72s
2025-06-05 19:35:42.404720 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s
2025-06-05 19:35:42.405152 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s
2025-06-05 19:35:42.405645 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s
2025-06-05 19:35:42.981601 | orchestrator |
2025-06-05 19:35:42.983186 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu Jun 5 19:35:42 UTC 2025
2025-06-05 19:35:42.983218 | orchestrator |
2025-06-05 19:35:44.610086 | orchestrator | 2025-06-05 19:35:44 | INFO  | Collection nutshell is prepared for execution
2025-06-05 19:35:44.610157 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [0] - dotfiles
2025-06-05 19:35:44.615258 | orchestrator | Registering Redlock._acquired_script
2025-06-05 19:35:44.616079 | orchestrator | Registering Redlock._extend_script
2025-06-05 19:35:44.616147 | orchestrator | Registering Redlock._release_script
2025-06-05 19:35:44.619362 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [0] - homer
2025-06-05 19:35:44.619394 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [0] - netdata
2025-06-05 19:35:44.619587 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [0] - openstackclient
2025-06-05 19:35:44.619605 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [0] - phpmyadmin
2025-06-05 19:35:44.619949 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [0] - common
2025-06-05 19:35:44.621078 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [1] -- loadbalancer
2025-06-05 19:35:44.621477 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [2] --- opensearch
2025-06-05 19:35:44.621500 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [2] --- mariadb-ng
2025-06-05 19:35:44.621510 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [3] ---- horizon
2025-06-05 19:35:44.621519 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [3] ---- keystone
2025-06-05 19:35:44.621552 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [4] ----- neutron
2025-06-05 19:35:44.621607 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [5] ------ wait-for-nova
2025-06-05 19:35:44.621620 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [5] ------ octavia
2025-06-05 19:35:44.621960 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [4] ----- barbican
2025-06-05 19:35:44.621994 | orchestrator |
2025-06-05 19:35:44 | INFO  | D [4] ----- designate
2025-06-05 19:35:44.622145 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [4] ----- ironic
2025-06-05 19:35:44.622198 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [4] ----- placement
2025-06-05 19:35:44.622210 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [4] ----- magnum
2025-06-05 19:35:44.623013 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [1] -- openvswitch
2025-06-05 19:35:44.623034 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [2] --- ovn
2025-06-05 19:35:44.623045 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [1] -- memcached
2025-06-05 19:35:44.623391 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [1] -- redis
2025-06-05 19:35:44.623409 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [1] -- rabbitmq-ng
2025-06-05 19:35:44.623418 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [0] - kubernetes
2025-06-05 19:35:44.624781 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [1] -- kubeconfig
2025-06-05 19:35:44.624801 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [1] -- copy-kubeconfig
2025-06-05 19:35:44.625018 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [0] - ceph
2025-06-05 19:35:44.626447 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [1] -- ceph-pools
2025-06-05 19:35:44.626467 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [2] --- copy-ceph-keys
2025-06-05 19:35:44.626477 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [3] ---- cephclient
2025-06-05 19:35:44.626487 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-05 19:35:44.626496 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [4] ----- wait-for-keystone
2025-06-05 19:35:44.626580 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-05 19:35:44.626594 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [5] ------ glance
2025-06-05 19:35:44.626960 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [5] ------ cinder
2025-06-05 19:35:44.626979 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [5] ------ nova
2025-06-05 19:35:44.626988 | orchestrator | 2025-06-05 19:35:44 | INFO  | A [4] ----- prometheus
2025-06-05 19:35:44.627180 | orchestrator | 2025-06-05 19:35:44 | INFO  | D [5] ------ grafana
2025-06-05 19:35:44.797604 | orchestrator | 2025-06-05 19:35:44 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-05 19:35:44.800405 | orchestrator | 2025-06-05 19:35:44 | INFO  | Tasks are running in the background
2025-06-05 19:35:47.407039 | orchestrator | 2025-06-05 19:35:47 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-05 19:35:49.542664 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:35:49.543838 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:35:49.546340 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:35:49.547319 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:35:49.547355 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:35:49.547653 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:35:49.548105 | orchestrator | 2025-06-05 19:35:49 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:35:49.548126 | orchestrator | 2025-06-05 19:35:49 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:35:52.614410 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:35:52.616040 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:35:52.616763 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:35:52.619718 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:35:52.620356 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:35:52.620861 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:35:52.621660 | orchestrator | 2025-06-05 19:35:52 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:35:52.621767 | orchestrator | 2025-06-05 19:35:52 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:35:55.671121 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:35:55.671508 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:35:55.671976 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:35:55.672687 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:35:55.674930 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:35:55.675328 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:35:55.684745 | orchestrator | 2025-06-05 19:35:55 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:35:55.684800 | orchestrator | 2025-06-05 19:35:55 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:35:58.734638 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:35:58.734728 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:35:58.740781 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:35:58.740860 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:35:58.745039 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:35:58.745092 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:35:58.751889 | orchestrator | 2025-06-05 19:35:58 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:35:58.751953 | orchestrator | 2025-06-05 19:35:58 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:36:01.799601 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:36:01.799961 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:36:01.801903 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:36:01.803485 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:36:01.804174 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:36:01.811238 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:36:01.811655 | orchestrator | 2025-06-05 19:36:01 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:36:01.811676 | orchestrator | 2025-06-05 19:36:01 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:36:04.849423 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:36:04.851669 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:36:04.855142 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:36:04.859107 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:36:04.861917 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:36:04.864239 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:36:04.865491 | orchestrator | 2025-06-05 19:36:04 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:36:04.865613 | orchestrator | 2025-06-05 19:36:04 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:36:07.923854 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:36:07.923941 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:36:07.924271 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:36:07.927876 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:36:07.928170 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:36:07.930173 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:36:07.933478 | orchestrator | 2025-06-05 19:36:07 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:36:07.933532 | orchestrator | 2025-06-05 19:36:07 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:36:10.985169 | orchestrator | 2025-06-05 19:36:10 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:36:10.985225 | orchestrator | 2025-06-05 19:36:10 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state STARTED
2025-06-05 19:36:10.985257 | orchestrator | 2025-06-05 19:36:10 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:36:10.988411 | orchestrator | 2025-06-05 19:36:10 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:36:10.995335 | orchestrator | 2025-06-05 19:36:10 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED
2025-06-05 19:36:10.999080 | orchestrator | 2025-06-05 19:36:10 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED
2025-06-05 19:36:11.000478 | orchestrator | 2025-06-05 19:36:11 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED
2025-06-05 19:36:11.000845 | orchestrator | 2025-06-05 19:36:11 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:36:14.051699 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED
2025-06-05 19:36:14.051780 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 7c0d3edb-23f7-48f5-b69f-a7897e00aeda is in state SUCCESS
2025-06-05 19:36:14.052858 | orchestrator |
2025-06-05 19:36:14.052905 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-05 19:36:14.052920 | orchestrator |
2025-06-05 19:36:14.052931 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2025-06-05 19:36:14.052946 | orchestrator | Thursday 05 June 2025 19:35:55 +0000 (0:00:00.251) 0:00:00.251 *********
2025-06-05 19:36:14.052966 | orchestrator | changed: [testbed-manager]
2025-06-05 19:36:14.052985 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:36:14.053004 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:36:14.053024 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:36:14.053042 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:36:14.053063 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:36:14.053082 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:36:14.053096 | orchestrator |
2025-06-05 19:36:14.053107 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-05 19:36:14.053118 | orchestrator | Thursday 05 June 2025 19:35:59 +0000 (0:00:04.010) 0:00:04.261 *********
2025-06-05 19:36:14.053130 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-05 19:36:14.053141 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-05 19:36:14.053152 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-05 19:36:14.053163 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-05 19:36:14.053174 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-05 19:36:14.053185 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-05 19:36:14.053198 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-05 19:36:14.053209 | orchestrator |
2025-06-05 19:36:14.053220 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-05 19:36:14.053232 | orchestrator | Thursday 05 June 2025 19:36:01 +0000 (0:00:01.980) 0:00:06.241 *********
2025-06-05 19:36:14.053253 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:00.650212', 'end': '2025-06-05 19:36:00.653998', 'delta': '0:00:00.003786', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053289 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:00.757483', 'end': '2025-06-05 19:36:00.767539', 'delta': '0:00:00.010056', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053303 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:00.876503', 'end': '2025-06-05 19:36:00.885784', 'delta': '0:00:00.009281', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053330 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:01.076050', 'end': '2025-06-05 19:36:01.084708', 'delta': '0:00:00.008658', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053385 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:01.278979', 'end': '2025-06-05 19:36:01.288012', 'delta': '0:00:00.009033', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053402 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:01.384367', 'end': '2025-06-05 19:36:01.392774', 'delta': '0:00:00.008407', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053469 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-05 19:36:01.500400', 'end': '2025-06-05 19:36:01.506187', 'delta': '0:00:00.005787', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-05 19:36:14.053485 | orchestrator |
2025-06-05 19:36:14.053543 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-05 19:36:14.053557 | orchestrator | Thursday 05 June 2025 19:36:04 +0000 (0:00:02.559) 0:00:08.800 *********
2025-06-05 19:36:14.053570 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-05 19:36:14.053583 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-05 19:36:14.053596 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-05 19:36:14.053609 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-05 19:36:14.053622 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-05 19:36:14.053635 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-05 19:36:14.053647 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-05 19:36:14.053660 | orchestrator |
2025-06-05 19:36:14.053673 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-06-05 19:36:14.053686 | orchestrator | Thursday 05 June 2025 19:36:06 +0000 (0:00:02.283) 0:00:11.083 *********
2025-06-05 19:36:14.053698 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-05 19:36:14.053711 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-05 19:36:14.053724 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-05 19:36:14.053737 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-05 19:36:14.053750 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-05 19:36:14.053762 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-05 19:36:14.053775 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-05 19:36:14.053788 | orchestrator |
2025-06-05 19:36:14.053801 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:36:14.053823 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053837 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053849 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053860 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053871 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053882 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053901 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:36:14.053912 | orchestrator |
2025-06-05 19:36:14.053923 | orchestrator |
2025-06-05 19:36:14.053934 | orchestrator | TASKS
RECAP ******************************************************************** 2025-06-05 19:36:14.053945 | orchestrator | Thursday 05 June 2025 19:36:10 +0000 (0:00:03.590) 0:00:14.674 ********* 2025-06-05 19:36:14.053956 | orchestrator | =============================================================================== 2025-06-05 19:36:14.053967 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.01s 2025-06-05 19:36:14.053978 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.59s 2025-06-05 19:36:14.053989 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.56s 2025-06-05 19:36:14.054000 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.28s 2025-06-05 19:36:14.054011 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.98s 2025-06-05 19:36:14.054130 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:14.054238 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:14.054342 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:14.055560 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:14.056743 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:14.057879 | orchestrator | 2025-06-05 19:36:14 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:14.058086 | orchestrator | 2025-06-05 19:36:14 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:17.114872 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is 
in state STARTED 2025-06-05 19:36:17.117958 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:17.119255 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:17.121316 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:17.122831 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:17.123425 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:17.126991 | orchestrator | 2025-06-05 19:36:17 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:17.127083 | orchestrator | 2025-06-05 19:36:17 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:20.170774 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED 2025-06-05 19:36:20.179046 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:20.179322 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:20.179752 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:20.180250 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:20.180874 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:20.181477 | orchestrator | 2025-06-05 19:36:20 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:20.181572 | orchestrator | 2025-06-05 19:36:20 | INFO  | Wait 1 second(s) until the 
next check 2025-06-05 19:36:23.259465 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED 2025-06-05 19:36:23.259599 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:23.259783 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:23.260576 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:23.261206 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:23.262344 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:23.263050 | orchestrator | 2025-06-05 19:36:23 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:23.263284 | orchestrator | 2025-06-05 19:36:23 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:26.318859 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED 2025-06-05 19:36:26.319145 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:26.321685 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:26.322178 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:26.324809 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:26.326833 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:26.329046 | orchestrator | 2025-06-05 19:36:26 | INFO  | Task 
07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:26.329082 | orchestrator | 2025-06-05 19:36:26 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:29.430365 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state STARTED 2025-06-05 19:36:29.433912 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:29.441257 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:29.455116 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:29.455187 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:29.457045 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:29.459055 | orchestrator | 2025-06-05 19:36:29 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:29.459104 | orchestrator | 2025-06-05 19:36:29 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:32.562527 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task aa82bc3a-9be0-4015-8721-27b9aece015a is in state SUCCESS 2025-06-05 19:36:32.562641 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:32.562700 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:32.565090 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:32.566994 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:32.567554 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task 
171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:32.569848 | orchestrator | 2025-06-05 19:36:32 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:32.569880 | orchestrator | 2025-06-05 19:36:32 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:35.617693 | orchestrator | 2025-06-05 19:36:35 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:35.618618 | orchestrator | 2025-06-05 19:36:35 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:35.621379 | orchestrator | 2025-06-05 19:36:35 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:35.621413 | orchestrator | 2025-06-05 19:36:35 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:35.625060 | orchestrator | 2025-06-05 19:36:35 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:35.625090 | orchestrator | 2025-06-05 19:36:35 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:35.625102 | orchestrator | 2025-06-05 19:36:35 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:38.669666 | orchestrator | 2025-06-05 19:36:38 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:38.672254 | orchestrator | 2025-06-05 19:36:38 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:38.679342 | orchestrator | 2025-06-05 19:36:38 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:38.684815 | orchestrator | 2025-06-05 19:36:38 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:38.685717 | orchestrator | 2025-06-05 19:36:38 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:38.688164 | orchestrator | 2025-06-05 19:36:38 | INFO  | Task 
07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:38.688196 | orchestrator | 2025-06-05 19:36:38 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:41.732810 | orchestrator | 2025-06-05 19:36:41 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:41.732903 | orchestrator | 2025-06-05 19:36:41 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:41.732918 | orchestrator | 2025-06-05 19:36:41 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:41.733144 | orchestrator | 2025-06-05 19:36:41 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:41.736887 | orchestrator | 2025-06-05 19:36:41 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:41.738135 | orchestrator | 2025-06-05 19:36:41 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:41.738171 | orchestrator | 2025-06-05 19:36:41 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:44.817022 | orchestrator | 2025-06-05 19:36:44 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:44.817885 | orchestrator | 2025-06-05 19:36:44 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:44.819179 | orchestrator | 2025-06-05 19:36:44 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:44.822642 | orchestrator | 2025-06-05 19:36:44 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:44.822828 | orchestrator | 2025-06-05 19:36:44 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:44.823225 | orchestrator | 2025-06-05 19:36:44 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state STARTED 2025-06-05 19:36:44.823250 | orchestrator | 2025-06-05 19:36:44 | INFO  | Wait 1 
second(s) until the next check 2025-06-05 19:36:47.854529 | orchestrator | 2025-06-05 19:36:47 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:47.855626 | orchestrator | 2025-06-05 19:36:47 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:47.862616 | orchestrator | 2025-06-05 19:36:47 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:47.862649 | orchestrator | 2025-06-05 19:36:47 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:47.862662 | orchestrator | 2025-06-05 19:36:47 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:47.862673 | orchestrator | 2025-06-05 19:36:47 | INFO  | Task 07d3ec39-3398-47e8-8bb0-6e60524e5f82 is in state SUCCESS 2025-06-05 19:36:47.862685 | orchestrator | 2025-06-05 19:36:47 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:50.903186 | orchestrator | 2025-06-05 19:36:50 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:50.904530 | orchestrator | 2025-06-05 19:36:50 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:50.904601 | orchestrator | 2025-06-05 19:36:50 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:50.904616 | orchestrator | 2025-06-05 19:36:50 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:50.907675 | orchestrator | 2025-06-05 19:36:50 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:50.907766 | orchestrator | 2025-06-05 19:36:50 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:53.955340 | orchestrator | 2025-06-05 19:36:53 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:53.960766 | orchestrator | 2025-06-05 19:36:53 | INFO  | Task 
42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:53.961024 | orchestrator | 2025-06-05 19:36:53 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:53.961887 | orchestrator | 2025-06-05 19:36:53 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:53.964960 | orchestrator | 2025-06-05 19:36:53 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:53.964991 | orchestrator | 2025-06-05 19:36:53 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:36:57.001094 | orchestrator | 2025-06-05 19:36:57 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:36:57.002532 | orchestrator | 2025-06-05 19:36:57 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:36:57.004614 | orchestrator | 2025-06-05 19:36:57 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:36:57.005676 | orchestrator | 2025-06-05 19:36:57 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:36:57.007187 | orchestrator | 2025-06-05 19:36:57 | INFO  | Task 171ad078-50af-4977-9b1d-6a059f545ea0 is in state STARTED 2025-06-05 19:36:57.007734 | orchestrator | 2025-06-05 19:36:57 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:00.050554 | orchestrator | 2025-06-05 19:37:00 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:00.050790 | orchestrator | 2025-06-05 19:37:00 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:37:00.052243 | orchestrator | 2025-06-05 19:37:00 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:00.054663 | orchestrator | 2025-06-05 19:37:00 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:00.055097 | orchestrator | 2025-06-05 19:37:00 | INFO  | Task 
171ad078-50af-4977-9b1d-6a059f545ea0 is in state SUCCESS 2025-06-05 19:37:00.057693 | orchestrator | 2025-06-05 19:37:00.057740 | orchestrator | 2025-06-05 19:37:00.057752 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-05 19:37:00.057764 | orchestrator | 2025-06-05 19:37:00.057776 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-05 19:37:00.057790 | orchestrator | Thursday 05 June 2025 19:35:57 +0000 (0:00:01.024) 0:00:01.024 ********* 2025-06-05 19:37:00.057802 | orchestrator | ok: [testbed-manager] => { 2025-06-05 19:37:00.057816 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-05 19:37:00.057829 | orchestrator | } 2025-06-05 19:37:00.057840 | orchestrator | 2025-06-05 19:37:00.057851 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-05 19:37:00.057862 | orchestrator | Thursday 05 June 2025 19:35:58 +0000 (0:00:00.270) 0:00:01.295 ********* 2025-06-05 19:37:00.057874 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.057886 | orchestrator | 2025-06-05 19:37:00.057897 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-05 19:37:00.057908 | orchestrator | Thursday 05 June 2025 19:35:59 +0000 (0:00:01.559) 0:00:02.854 ********* 2025-06-05 19:37:00.057919 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-05 19:37:00.057931 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-05 19:37:00.057942 | orchestrator | 2025-06-05 19:37:00.057953 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-05 19:37:00.057965 | orchestrator | Thursday 05 June 2025 19:36:00 +0000 (0:00:01.333) 0:00:04.187 ********* 2025-06-05 19:37:00.057985 | 
orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058003 | orchestrator | 2025-06-05 19:37:00.058119 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-05 19:37:00.058139 | orchestrator | Thursday 05 June 2025 19:36:02 +0000 (0:00:01.870) 0:00:06.058 ********* 2025-06-05 19:37:00.058150 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058161 | orchestrator | 2025-06-05 19:37:00.058172 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-05 19:37:00.058184 | orchestrator | Thursday 05 June 2025 19:36:04 +0000 (0:00:01.560) 0:00:07.618 ********* 2025-06-05 19:37:00.058195 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-06-05 19:37:00.058206 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.058217 | orchestrator | 2025-06-05 19:37:00.058228 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-05 19:37:00.058239 | orchestrator | Thursday 05 June 2025 19:36:29 +0000 (0:00:25.010) 0:00:32.629 ********* 2025-06-05 19:37:00.058250 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058282 | orchestrator | 2025-06-05 19:37:00.058295 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:37:00.058309 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.058323 | orchestrator | 2025-06-05 19:37:00.058335 | orchestrator | 2025-06-05 19:37:00.058349 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:37:00.058362 | orchestrator | Thursday 05 June 2025 19:36:31 +0000 (0:00:02.528) 0:00:35.157 ********* 2025-06-05 19:37:00.058375 | orchestrator | =============================================================================== 2025-06-05 
19:37:00.058387 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.01s 2025-06-05 19:37:00.058400 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.53s 2025-06-05 19:37:00.058413 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 1.87s 2025-06-05 19:37:00.058426 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.56s 2025-06-05 19:37:00.058474 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.56s 2025-06-05 19:37:00.058487 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.33s 2025-06-05 19:37:00.058498 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.27s 2025-06-05 19:37:00.058509 | orchestrator | 2025-06-05 19:37:00.058520 | orchestrator | 2025-06-05 19:37:00.058532 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-05 19:37:00.058543 | orchestrator | 2025-06-05 19:37:00.058554 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-05 19:37:00.058565 | orchestrator | Thursday 05 June 2025 19:35:55 +0000 (0:00:00.643) 0:00:00.643 ********* 2025-06-05 19:37:00.058576 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-05 19:37:00.058589 | orchestrator | 2025-06-05 19:37:00.058600 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-05 19:37:00.058611 | orchestrator | Thursday 05 June 2025 19:35:56 +0000 (0:00:00.728) 0:00:01.372 ********* 2025-06-05 19:37:00.058621 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-05 19:37:00.058633 
| orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-05 19:37:00.058644 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-05 19:37:00.058655 | orchestrator | 2025-06-05 19:37:00.058666 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-05 19:37:00.058677 | orchestrator | Thursday 05 June 2025 19:35:58 +0000 (0:00:01.924) 0:00:03.297 ********* 2025-06-05 19:37:00.058688 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058699 | orchestrator | 2025-06-05 19:37:00.058710 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-05 19:37:00.058721 | orchestrator | Thursday 05 June 2025 19:35:59 +0000 (0:00:01.704) 0:00:05.001 ********* 2025-06-05 19:37:00.058747 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-05 19:37:00.058758 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.058769 | orchestrator | 2025-06-05 19:37:00.058780 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-05 19:37:00.058791 | orchestrator | Thursday 05 June 2025 19:36:37 +0000 (0:00:37.908) 0:00:42.909 ********* 2025-06-05 19:37:00.058801 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058812 | orchestrator | 2025-06-05 19:37:00.058823 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-05 19:37:00.058834 | orchestrator | Thursday 05 June 2025 19:36:38 +0000 (0:00:01.111) 0:00:44.021 ********* 2025-06-05 19:37:00.058845 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.058856 | orchestrator | 2025-06-05 19:37:00.058866 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-05 19:37:00.058885 | orchestrator | Thursday 05 June 2025 19:36:39 +0000 (0:00:01.164) 
0:00:45.185 ********* 2025-06-05 19:37:00.058896 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058907 | orchestrator | 2025-06-05 19:37:00.058917 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-05 19:37:00.058928 | orchestrator | Thursday 05 June 2025 19:36:42 +0000 (0:00:02.411) 0:00:47.597 ********* 2025-06-05 19:37:00.058939 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058950 | orchestrator | 2025-06-05 19:37:00.058961 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-05 19:37:00.058972 | orchestrator | Thursday 05 June 2025 19:36:43 +0000 (0:00:01.544) 0:00:49.142 ********* 2025-06-05 19:37:00.058983 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.058994 | orchestrator | 2025-06-05 19:37:00.059004 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-05 19:37:00.059021 | orchestrator | Thursday 05 June 2025 19:36:44 +0000 (0:00:01.039) 0:00:50.181 ********* 2025-06-05 19:37:00.059040 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.059060 | orchestrator | 2025-06-05 19:37:00.059080 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:37:00.059100 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.059119 | orchestrator | 2025-06-05 19:37:00.059130 | orchestrator | 2025-06-05 19:37:00.059141 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:37:00.059152 | orchestrator | Thursday 05 June 2025 19:36:45 +0000 (0:00:00.574) 0:00:50.756 ********* 2025-06-05 19:37:00.059163 | orchestrator | =============================================================================== 2025-06-05 19:37:00.059173 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 37.91s 2025-06-05 19:37:00.059184 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.41s 2025-06-05 19:37:00.059195 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.92s 2025-06-05 19:37:00.059206 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.70s 2025-06-05 19:37:00.059217 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.54s 2025-06-05 19:37:00.059228 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.16s 2025-06-05 19:37:00.059238 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.11s 2025-06-05 19:37:00.059249 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.04s 2025-06-05 19:37:00.059260 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.73s 2025-06-05 19:37:00.059270 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.57s 2025-06-05 19:37:00.059281 | orchestrator | 2025-06-05 19:37:00.059292 | orchestrator | 2025-06-05 19:37:00.059308 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:37:00.059319 | orchestrator | 2025-06-05 19:37:00.059330 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:37:00.059340 | orchestrator | Thursday 05 June 2025 19:35:56 +0000 (0:00:00.426) 0:00:00.426 ********* 2025-06-05 19:37:00.059351 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-05 19:37:00.059362 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-05 19:37:00.059372 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 
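The "Group hosts based on enabled services" task above, with its `enable_netdata_True` loop item, is the usual Ansible `group_by` pattern: each host is placed into a dynamic group whose name encodes a feature flag, so that a later play can target only the hosts where the service is enabled. A minimal sketch of that pattern, assuming the play layout and variable default (the actual osism playbook source may differ):

```yaml
# Hedged sketch of the grouping pattern suggested by the log above;
# the play structure and the default value are assumptions.
- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_netdata_{{ enable_netdata | default(false) }}"

# A later play can then target only hosts with the service enabled,
# which is why "PLAY [Apply role netdata]" runs on all seven hosts here:
- name: Apply role netdata
  hosts: enable_netdata_True
  roles:
    - osism.services.netdata
```

`group_by` reports `changed` for every host it regroups, which matches the `changed: [testbed-…]` lines in the log even though nothing on the managed nodes was modified.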
2025-06-05 19:37:00.059383 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-05 19:37:00.059394 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-05 19:37:00.059404 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-05 19:37:00.059415 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-05 19:37:00.059433 | orchestrator | 2025-06-05 19:37:00.059554 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-05 19:37:00.059568 | orchestrator | 2025-06-05 19:37:00.059579 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-05 19:37:00.059590 | orchestrator | Thursday 05 June 2025 19:35:58 +0000 (0:00:02.569) 0:00:02.995 ********* 2025-06-05 19:37:00.059615 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:37:00.059634 | orchestrator | 2025-06-05 19:37:00.059646 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-05 19:37:00.059657 | orchestrator | Thursday 05 June 2025 19:36:01 +0000 (0:00:02.889) 0:00:05.885 ********* 2025-06-05 19:37:00.059668 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:37:00.059679 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.059690 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:37:00.059701 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:37:00.059712 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:37:00.059731 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:37:00.059742 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:37:00.059753 | orchestrator | 2025-06-05 19:37:00.059764 | orchestrator | TASK [osism.services.netdata : 
Install apt-transport-https package] ************ 2025-06-05 19:37:00.059775 | orchestrator | Thursday 05 June 2025 19:36:03 +0000 (0:00:02.285) 0:00:08.170 ********* 2025-06-05 19:37:00.059785 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.059796 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:37:00.059807 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:37:00.059818 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:37:00.059828 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:37:00.059839 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:37:00.059850 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:37:00.059862 | orchestrator | 2025-06-05 19:37:00.059881 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-05 19:37:00.059892 | orchestrator | Thursday 05 June 2025 19:36:07 +0000 (0:00:04.019) 0:00:12.191 ********* 2025-06-05 19:37:00.059903 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.059915 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:37:00.059925 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:37:00.059936 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:37:00.059947 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:37:00.059958 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:37:00.059968 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:37:00.059979 | orchestrator | 2025-06-05 19:37:00.059990 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-05 19:37:00.060001 | orchestrator | Thursday 05 June 2025 19:36:10 +0000 (0:00:02.690) 0:00:14.881 ********* 2025-06-05 19:37:00.060012 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.060023 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:37:00.060033 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:37:00.060044 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:37:00.060063 | 
orchestrator | changed: [testbed-node-2] 2025-06-05 19:37:00.060081 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:37:00.060101 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:37:00.060122 | orchestrator | 2025-06-05 19:37:00.060140 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-05 19:37:00.060155 | orchestrator | Thursday 05 June 2025 19:36:20 +0000 (0:00:09.952) 0:00:24.834 ********* 2025-06-05 19:37:00.060166 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:37:00.060177 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:37:00.060188 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:37:00.060199 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:37:00.060209 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:37:00.060220 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:37:00.060248 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.060259 | orchestrator | 2025-06-05 19:37:00.060270 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-05 19:37:00.060281 | orchestrator | Thursday 05 June 2025 19:36:39 +0000 (0:00:18.578) 0:00:43.412 ********* 2025-06-05 19:37:00.060293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:37:00.060306 | orchestrator | 2025-06-05 19:37:00.060317 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-05 19:37:00.060328 | orchestrator | Thursday 05 June 2025 19:36:41 +0000 (0:00:02.208) 0:00:45.621 ********* 2025-06-05 19:37:00.060339 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-05 19:37:00.060350 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-05 
19:37:00.060361 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-05 19:37:00.060371 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-05 19:37:00.060382 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-05 19:37:00.060393 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-05 19:37:00.060404 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-05 19:37:00.060415 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-05 19:37:00.060426 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-05 19:37:00.060509 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-05 19:37:00.060523 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-05 19:37:00.060534 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-05 19:37:00.060545 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-05 19:37:00.060556 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-05 19:37:00.060567 | orchestrator | 2025-06-05 19:37:00.060578 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-05 19:37:00.060589 | orchestrator | Thursday 05 June 2025 19:36:46 +0000 (0:00:05.237) 0:00:50.859 ********* 2025-06-05 19:37:00.060600 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.060611 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:37:00.060622 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:37:00.060633 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:37:00.060644 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:37:00.060655 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:37:00.060665 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:37:00.060676 | orchestrator | 2025-06-05 19:37:00.060687 | orchestrator | TASK [osism.services.netdata : Opt out from 
anonymous statistics] ************** 2025-06-05 19:37:00.060698 | orchestrator | Thursday 05 June 2025 19:36:47 +0000 (0:00:01.263) 0:00:52.122 ********* 2025-06-05 19:37:00.060709 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.060720 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:37:00.060731 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:37:00.060742 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:37:00.060753 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:37:00.060764 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:37:00.060775 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:37:00.060786 | orchestrator | 2025-06-05 19:37:00.060797 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-05 19:37:00.060817 | orchestrator | Thursday 05 June 2025 19:36:49 +0000 (0:00:01.885) 0:00:54.008 ********* 2025-06-05 19:37:00.060829 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.060840 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:37:00.060851 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:37:00.060862 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:37:00.060873 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:37:00.060884 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:37:00.060902 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:37:00.060913 | orchestrator | 2025-06-05 19:37:00.060924 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-05 19:37:00.060936 | orchestrator | Thursday 05 June 2025 19:36:51 +0000 (0:00:01.430) 0:00:55.438 ********* 2025-06-05 19:37:00.060947 | orchestrator | ok: [testbed-manager] 2025-06-05 19:37:00.060958 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:37:00.060968 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:37:00.060979 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:37:00.060990 | orchestrator | ok: [testbed-node-3] 
2025-06-05 19:37:00.061001 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:37:00.061011 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:37:00.061022 | orchestrator | 2025-06-05 19:37:00.061033 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-05 19:37:00.061044 | orchestrator | Thursday 05 June 2025 19:36:53 +0000 (0:00:02.076) 0:00:57.515 ********* 2025-06-05 19:37:00.061056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-05 19:37:00.061068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:37:00.061080 | orchestrator | 2025-06-05 19:37:00.061100 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-05 19:37:00.061118 | orchestrator | Thursday 05 June 2025 19:36:54 +0000 (0:00:01.355) 0:00:58.870 ********* 2025-06-05 19:37:00.061138 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.061158 | orchestrator | 2025-06-05 19:37:00.061177 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-05 19:37:00.061194 | orchestrator | Thursday 05 June 2025 19:36:56 +0000 (0:00:01.765) 0:01:00.636 ********* 2025-06-05 19:37:00.061205 | orchestrator | changed: [testbed-manager] 2025-06-05 19:37:00.061216 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:37:00.061227 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:37:00.061237 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:37:00.061248 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:37:00.061259 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:37:00.061270 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:37:00.061281 | 
orchestrator | 2025-06-05 19:37:00.061291 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:37:00.061302 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061314 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061325 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061336 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061347 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061367 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061383 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:37:00.061394 | orchestrator | 2025-06-05 19:37:00.061405 | orchestrator | 2025-06-05 19:37:00.061416 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:37:00.061427 | orchestrator | Thursday 05 June 2025 19:36:59 +0000 (0:00:03.096) 0:01:03.733 ********* 2025-06-05 19:37:00.061477 | orchestrator | =============================================================================== 2025-06-05 19:37:00.061489 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.58s 2025-06-05 19:37:00.061500 | orchestrator | osism.services.netdata : Add repository --------------------------------- 9.95s 2025-06-05 19:37:00.061511 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.24s 2025-06-05 19:37:00.061522 | orchestrator | osism.services.netdata : Install apt-transport-https package 
------------ 4.02s 2025-06-05 19:37:00.061533 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.10s 2025-06-05 19:37:00.061544 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.89s 2025-06-05 19:37:00.061554 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.69s 2025-06-05 19:37:00.061565 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.57s 2025-06-05 19:37:00.061576 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.29s 2025-06-05 19:37:00.061587 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 2.21s 2025-06-05 19:37:00.061598 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.08s 2025-06-05 19:37:00.061616 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.89s 2025-06-05 19:37:00.061627 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.77s 2025-06-05 19:37:00.061638 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.43s 2025-06-05 19:37:00.061649 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.36s 2025-06-05 19:37:00.061660 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.26s 2025-06-05 19:37:00.061704 | orchestrator | 2025-06-05 19:37:00 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:03.090908 | orchestrator | 2025-06-05 19:37:03 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:03.091734 | orchestrator | 2025-06-05 19:37:03 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:37:03.093563 | orchestrator | 2025-06-05 19:37:03 | INFO  | Task 
3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:03.093589 | orchestrator | 2025-06-05 19:37:03 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:03.093602 | orchestrator | 2025-06-05 19:37:03 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:06.127516 | orchestrator | 2025-06-05 19:37:06 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:06.127750 | orchestrator | 2025-06-05 19:37:06 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:37:06.128784 | orchestrator | 2025-06-05 19:37:06 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:06.129199 | orchestrator | 2025-06-05 19:37:06 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:06.129222 | orchestrator | 2025-06-05 19:37:06 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:09.179596 | orchestrator | 2025-06-05 19:37:09 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:09.179854 | orchestrator | 2025-06-05 19:37:09 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:37:09.181021 | orchestrator | 2025-06-05 19:37:09 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:09.184229 | orchestrator | 2025-06-05 19:37:09 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:09.184291 | orchestrator | 2025-06-05 19:37:09 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:12.224621 | orchestrator | 2025-06-05 19:37:12 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:12.226670 | orchestrator | 2025-06-05 19:37:12 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:37:12.227973 | orchestrator | 2025-06-05 19:37:12 | INFO  | Task 
3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:12.230652 | orchestrator | 2025-06-05 19:37:12 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:12.230684 | orchestrator | 2025-06-05 19:37:12 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:15.282995 | orchestrator | 2025-06-05 19:37:15 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:15.287187 | orchestrator | 2025-06-05 19:37:15 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state STARTED 2025-06-05 19:37:15.287226 | orchestrator | 2025-06-05 19:37:15 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:15.287239 | orchestrator | 2025-06-05 19:37:15 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:15.287251 | orchestrator | 2025-06-05 19:37:15 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:18.334803 | orchestrator | 2025-06-05 19:37:18 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:18.335649 | orchestrator | 2025-06-05 19:37:18 | INFO  | Task 42d7c678-937e-4ca7-adab-c76584383d75 is in state SUCCESS 2025-06-05 19:37:18.338294 | orchestrator | 2025-06-05 19:37:18 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:18.340482 | orchestrator | 2025-06-05 19:37:18 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:18.340635 | orchestrator | 2025-06-05 19:37:18 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:21.389876 | orchestrator | 2025-06-05 19:37:21 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:21.390798 | orchestrator | 2025-06-05 19:37:21 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:21.392066 | orchestrator | 2025-06-05 19:37:21 | INFO  | Task 
26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:21.392102 | orchestrator | 2025-06-05 19:37:21 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:24.421818 | orchestrator | 2025-06-05 19:37:24 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:24.424390 | orchestrator | 2025-06-05 19:37:24 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:24.425762 | orchestrator | 2025-06-05 19:37:24 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:24.425797 | orchestrator | 2025-06-05 19:37:24 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:27.472553 | orchestrator | 2025-06-05 19:37:27 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:27.473286 | orchestrator | 2025-06-05 19:37:27 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:27.478264 | orchestrator | 2025-06-05 19:37:27 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:27.479680 | orchestrator | 2025-06-05 19:37:27 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:30.527751 | orchestrator | 2025-06-05 19:37:30 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:30.528638 | orchestrator | 2025-06-05 19:37:30 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:30.530394 | orchestrator | 2025-06-05 19:37:30 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:30.530496 | orchestrator | 2025-06-05 19:37:30 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:33.567387 | orchestrator | 2025-06-05 19:37:33 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:33.568600 | orchestrator | 2025-06-05 19:37:33 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state 
STARTED 2025-06-05 19:37:33.570479 | orchestrator | 2025-06-05 19:37:33 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:33.570514 | orchestrator | 2025-06-05 19:37:33 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:36.621497 | orchestrator | 2025-06-05 19:37:36 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:36.623297 | orchestrator | 2025-06-05 19:37:36 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:36.625591 | orchestrator | 2025-06-05 19:37:36 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:36.625621 | orchestrator | 2025-06-05 19:37:36 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:39.675854 | orchestrator | 2025-06-05 19:37:39 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:39.677268 | orchestrator | 2025-06-05 19:37:39 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:39.684615 | orchestrator | 2025-06-05 19:37:39 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:39.684679 | orchestrator | 2025-06-05 19:37:39 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:42.733743 | orchestrator | 2025-06-05 19:37:42 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:42.735088 | orchestrator | 2025-06-05 19:37:42 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:42.736824 | orchestrator | 2025-06-05 19:37:42 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:42.736854 | orchestrator | 2025-06-05 19:37:42 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:45.791174 | orchestrator | 2025-06-05 19:37:45 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:45.796932 | orchestrator | 
2025-06-05 19:37:45 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:45.798535 | orchestrator | 2025-06-05 19:37:45 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:45.799121 | orchestrator | 2025-06-05 19:37:45 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:48.860948 | orchestrator | 2025-06-05 19:37:48 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:48.862622 | orchestrator | 2025-06-05 19:37:48 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:48.863851 | orchestrator | 2025-06-05 19:37:48 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:48.865198 | orchestrator | 2025-06-05 19:37:48 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:51.910210 | orchestrator | 2025-06-05 19:37:51 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:51.912283 | orchestrator | 2025-06-05 19:37:51 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:51.915027 | orchestrator | 2025-06-05 19:37:51 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:51.915064 | orchestrator | 2025-06-05 19:37:51 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:54.970814 | orchestrator | 2025-06-05 19:37:54 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:54.970925 | orchestrator | 2025-06-05 19:37:54 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:54.970941 | orchestrator | 2025-06-05 19:37:54 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:54.970953 | orchestrator | 2025-06-05 19:37:54 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:37:58.018906 | orchestrator | 2025-06-05 19:37:58 | INFO  | Task 
6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:37:58.020081 | orchestrator | 2025-06-05 19:37:58 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:37:58.022164 | orchestrator | 2025-06-05 19:37:58 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:37:58.022197 | orchestrator | 2025-06-05 19:37:58 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:38:01.065971 | orchestrator | 2025-06-05 19:38:01 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:01.068610 | orchestrator | 2025-06-05 19:38:01 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:01.070213 | orchestrator | 2025-06-05 19:38:01 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:38:01.070336 | orchestrator | 2025-06-05 19:38:01 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:38:04.112609 | orchestrator | 2025-06-05 19:38:04 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:04.113072 | orchestrator | 2025-06-05 19:38:04 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:04.113904 | orchestrator | 2025-06-05 19:38:04 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:38:04.113932 | orchestrator | 2025-06-05 19:38:04 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:38:07.153934 | orchestrator | 2025-06-05 19:38:07 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:07.154084 | orchestrator | 2025-06-05 19:38:07 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:07.154264 | orchestrator | 2025-06-05 19:38:07 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:38:07.154660 | orchestrator | 2025-06-05 19:38:07 | INFO  | Wait 1 second(s) until the next 
check 2025-06-05 19:38:10.203112 | orchestrator | 2025-06-05 19:38:10 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:10.204573 | orchestrator | 2025-06-05 19:38:10 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:10.206308 | orchestrator | 2025-06-05 19:38:10 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:38:10.206473 | orchestrator | 2025-06-05 19:38:10 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:38:13.254913 | orchestrator | 2025-06-05 19:38:13 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:13.256473 | orchestrator | 2025-06-05 19:38:13 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:13.257906 | orchestrator | 2025-06-05 19:38:13 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state STARTED 2025-06-05 19:38:13.258090 | orchestrator | 2025-06-05 19:38:13 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:38:16.304070 | orchestrator | 2025-06-05 19:38:16 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:16.305096 | orchestrator | 2025-06-05 19:38:16 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:16.308554 | orchestrator | 2025-06-05 19:38:16.308592 | orchestrator | 2025-06-05 19:38:16.308605 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-05 19:38:16.308617 | orchestrator | 2025-06-05 19:38:16.308629 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-05 19:38:16.308641 | orchestrator | Thursday 05 June 2025 19:36:17 +0000 (0:00:00.256) 0:00:00.256 ********* 2025-06-05 19:38:16.308652 | orchestrator | ok: [testbed-manager] 2025-06-05 19:38:16.308664 | orchestrator | 2025-06-05 19:38:16.308675 | orchestrator | TASK 
[osism.services.phpmyadmin : Create required directories] ***************** 2025-06-05 19:38:16.308686 | orchestrator | Thursday 05 June 2025 19:36:18 +0000 (0:00:00.775) 0:00:01.031 ********* 2025-06-05 19:38:16.308697 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-05 19:38:16.308708 | orchestrator | 2025-06-05 19:38:16.308719 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-05 19:38:16.308730 | orchestrator | Thursday 05 June 2025 19:36:18 +0000 (0:00:00.562) 0:00:01.594 ********* 2025-06-05 19:38:16.308741 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.308752 | orchestrator | 2025-06-05 19:38:16.308763 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-05 19:38:16.308774 | orchestrator | Thursday 05 June 2025 19:36:20 +0000 (0:00:01.352) 0:00:02.946 ********* 2025-06-05 19:38:16.308785 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-06-05 19:38:16.308795 | orchestrator | ok: [testbed-manager] 2025-06-05 19:38:16.308806 | orchestrator | 2025-06-05 19:38:16.308817 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-06-05 19:38:16.308827 | orchestrator | Thursday 05 June 2025 19:37:14 +0000 (0:00:54.075) 0:00:57.021 ********* 2025-06-05 19:38:16.308838 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.308849 | orchestrator | 2025-06-05 19:38:16.308859 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:38:16.308871 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:38:16.308884 | orchestrator | 2025-06-05 19:38:16.308896 | orchestrator | 2025-06-05 19:38:16.308907 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:38:16.308918 | orchestrator | Thursday 05 June 2025 19:37:17 +0000 (0:00:03.385) 0:01:00.407 ********* 2025-06-05 19:38:16.308928 | orchestrator | =============================================================================== 2025-06-05 19:38:16.308939 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 54.08s 2025-06-05 19:38:16.308950 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.39s 2025-06-05 19:38:16.308960 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.35s 2025-06-05 19:38:16.308971 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.78s 2025-06-05 19:38:16.308982 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.56s 2025-06-05 19:38:16.308993 | orchestrator | 2025-06-05 19:38:16.309028 | orchestrator | 2025-06-05 19:38:16 | INFO  | Task 26074737-61ba-4053-bb66-60fd82dd4f4d is in state SUCCESS 2025-06-05 
19:38:16.311704 | orchestrator |
2025-06-05 19:38:16.311754 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-05 19:38:16.311767 | orchestrator |
2025-06-05 19:38:16.311778 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-05 19:38:16.311809 | orchestrator | Thursday 05 June 2025 19:35:49 +0000 (0:00:00.255) 0:00:00.255 *********
2025-06-05 19:38:16.311827 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:38:16.311840 | orchestrator |
2025-06-05 19:38:16.311851 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-05 19:38:16.311862 | orchestrator | Thursday 05 June 2025 19:35:50 +0000 (0:00:01.253) 0:00:01.509 *********
2025-06-05 19:38:16.311873 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.311884 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.311895 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.311906 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.311917 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.311928 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.311939 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.311950 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.311961 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.311972 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.311983 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.311995 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.312006 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-05 19:38:16.312017 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.312028 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.312038 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.312050 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.312060 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-05 19:38:16.312071 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.312082 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.312093 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-05 19:38:16.312104 | orchestrator |
2025-06-05 19:38:16.312115 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-05 19:38:16.312126 | orchestrator | Thursday 05 June 2025 19:35:54 +0000 (0:00:04.348) 0:00:05.858 *********
2025-06-05 19:38:16.312137 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:38:16.312150 | orchestrator |
2025-06-05 19:38:16.312163 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-05 19:38:16.312176 | orchestrator | Thursday 05 June 2025 19:35:56 +0000 (0:00:01.585) 0:00:07.443 *********
2025-06-05 19:38:16.312193 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312287 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312401 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312421 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312459 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312517 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312546 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312593 | orchestrator |
2025-06-05 19:38:16.312604 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-05 19:38:16.312616 | orchestrator | Thursday 05 June 2025 19:36:01 +0000 (0:00:05.524) 0:00:12.968 *********
2025-06-05 19:38:16.312628 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312657 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312715 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:38:16.312727 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:38:16.312738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312779 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:38:16.312791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312802 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312836 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:38:16.312847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312871 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312888 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:38:16.312899 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312911 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312922 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312933 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:38:16.312957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.312973 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312985 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.312996 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:38:16.313007 | orchestrator |
2025-06-05 19:38:16.313019 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-05 19:38:16.313030 | orchestrator | Thursday 05 June 2025 19:36:03 +0000 (0:00:01.468) 0:00:14.436 *********
2025-06-05 19:38:16.313041 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.313059 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313071 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313082 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:38:16.313093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.313110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313139 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:38:16.313150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.313162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313190 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:38:16.313202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.313213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313236 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:38:16.313306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.313321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313413 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:38:16.313426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.313438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313449 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.313460 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:38:16.313472 |
orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-05 19:38:16.313496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:38:16.313513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:38:16.313524 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:38:16.313535 | orchestrator | 2025-06-05 19:38:16.313546 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-05 19:38:16.313558 | orchestrator | Thursday 05 June 2025 19:36:05 +0000 
(0:00:02.349) 0:00:16.785 ********* 2025-06-05 19:38:16.313579 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:38:16.313590 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:38:16.313601 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:38:16.313612 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:38:16.313622 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:38:16.313631 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:38:16.313641 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:38:16.313651 | orchestrator | 2025-06-05 19:38:16.313660 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-05 19:38:16.313670 | orchestrator | Thursday 05 June 2025 19:36:06 +0000 (0:00:00.907) 0:00:17.693 ********* 2025-06-05 19:38:16.313680 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:38:16.313690 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:38:16.313699 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:38:16.313709 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:38:16.313718 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:38:16.313728 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:38:16.313737 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:38:16.313747 | orchestrator | 2025-06-05 19:38:16.313757 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-05 19:38:16.313767 | orchestrator | Thursday 05 June 2025 19:36:07 +0000 (0:00:01.058) 0:00:18.752 ********* 2025-06-05 19:38:16.313776 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313808 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-05 19:38:16.313828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313848 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313869 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313901 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.313915 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313988 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.313999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.314111 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.314132 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.314143 | orchestrator | 2025-06-05 19:38:16.314153 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-05 19:38:16.314163 | orchestrator | Thursday 05 June 2025 19:36:13 +0000 (0:00:06.039) 0:00:24.791 ********* 2025-06-05 19:38:16.314173 | orchestrator | [WARNING]: Skipped 2025-06-05 19:38:16.314183 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-05 19:38:16.314193 | orchestrator | to this access issue: 2025-06-05 19:38:16.314202 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-05 19:38:16.314212 | orchestrator | directory 2025-06-05 19:38:16.314221 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 19:38:16.314231 | orchestrator | 2025-06-05 19:38:16.314241 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-05 19:38:16.314250 | orchestrator | Thursday 05 June 2025 19:36:15 +0000 (0:00:01.784) 0:00:26.575 ********* 2025-06-05 19:38:16.314260 | orchestrator | [WARNING]: Skipped 2025-06-05 19:38:16.314270 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-05 19:38:16.314279 | orchestrator | to this access issue: 2025-06-05 19:38:16.314289 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-05 19:38:16.314298 | orchestrator | directory 2025-06-05 19:38:16.314308 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 19:38:16.314317 | orchestrator | 2025-06-05 19:38:16.314327 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-05 19:38:16.314396 | orchestrator | Thursday 05 June 2025 19:36:16 +0000 (0:00:00.944) 0:00:27.520 ********* 2025-06-05 19:38:16.314415 | orchestrator | [WARNING]: Skipped 2025-06-05 19:38:16.314429 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-05 19:38:16.314439 | orchestrator | to this access issue: 2025-06-05 19:38:16.314449 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-05 19:38:16.314459 | orchestrator | directory 2025-06-05 19:38:16.314471 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 
19:38:16.314487 | orchestrator | 2025-06-05 19:38:16.314501 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-05 19:38:16.314511 | orchestrator | Thursday 05 June 2025 19:36:17 +0000 (0:00:00.889) 0:00:28.409 ********* 2025-06-05 19:38:16.314521 | orchestrator | [WARNING]: Skipped 2025-06-05 19:38:16.314531 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-05 19:38:16.314540 | orchestrator | to this access issue: 2025-06-05 19:38:16.314550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-05 19:38:16.314559 | orchestrator | directory 2025-06-05 19:38:16.314574 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 19:38:16.314589 | orchestrator | 2025-06-05 19:38:16.314605 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-05 19:38:16.314623 | orchestrator | Thursday 05 June 2025 19:36:18 +0000 (0:00:00.930) 0:00:29.340 ********* 2025-06-05 19:38:16.314650 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.314661 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.314670 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.314680 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.314689 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.314699 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:38:16.314709 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.314718 | orchestrator | 2025-06-05 19:38:16.314726 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-05 19:38:16.314734 | orchestrator | Thursday 05 June 2025 19:36:23 +0000 (0:00:05.345) 0:00:34.686 ********* 2025-06-05 19:38:16.314742 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 
19:38:16.314750 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 19:38:16.314758 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 19:38:16.314766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 19:38:16.314774 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 19:38:16.314782 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 19:38:16.314790 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-05 19:38:16.314797 | orchestrator | 2025-06-05 19:38:16.314805 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-05 19:38:16.314813 | orchestrator | Thursday 05 June 2025 19:36:26 +0000 (0:00:02.617) 0:00:37.303 ********* 2025-06-05 19:38:16.314827 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.314839 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.314852 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.314864 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.314883 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.314896 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.314910 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:38:16.314921 | orchestrator | 2025-06-05 19:38:16.314929 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-05 19:38:16.314937 | orchestrator | Thursday 05 June 2025 19:36:28 +0000 (0:00:02.508) 0:00:39.812 ********* 2025-06-05 19:38:16.314950 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.314960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:38:16.314969 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.314983 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:38:16.314993 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.315013 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.315022 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-05 19:38:16.315041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315049 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315071 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315079 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315087 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315112 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315130 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315167 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315190 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315199 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315207 | orchestrator |
2025-06-05 19:38:16.315215 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-06-05 19:38:16.315223 | orchestrator | Thursday 05 June 2025 19:36:32 +0000 (0:00:03.910) 0:00:43.722 *********
2025-06-05 19:38:16.315231 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315239 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315247 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315255 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315263 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315270 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315278 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-06-05 19:38:16.315286 | orchestrator |
2025-06-05 19:38:16.315299 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-06-05 19:38:16.315308 | orchestrator | Thursday 05 June 2025 19:36:34 +0000 (0:00:02.161) 0:00:45.884 *********
2025-06-05 19:38:16.315316 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315372 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315387 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315395 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-06-05 19:38:16.315403 | orchestrator |
2025-06-05 19:38:16.315411 | orchestrator | TASK [common : Check common containers] ****************************************
2025-06-05 19:38:16.315419 | orchestrator | Thursday 05 June 2025 19:36:37 +0000 (0:00:02.828) 0:00:48.713 *********
2025-06-05 19:38:16.315429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315453 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315511 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315536 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-05 19:38:16.315544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315570 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315584 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315601 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315610 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:38:16.315635 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:38:16.315648 | orchestrator | 2025-06-05 19:38:16.315656 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-05 19:38:16.315664 | orchestrator | Thursday 05 June 2025 19:36:41 +0000 (0:00:04.168) 0:00:52.881 ********* 2025-06-05 19:38:16.315676 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.315685 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.315693 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.315700 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.315708 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.315716 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.315724 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:38:16.315732 | orchestrator | 2025-06-05 19:38:16.315740 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-05 19:38:16.315752 | orchestrator | Thursday 05 June 2025 19:36:43 +0000 (0:00:02.143) 0:00:55.025 ********* 2025-06-05 19:38:16.315760 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.315768 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.315776 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.315783 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.315791 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.315799 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.315807 | orchestrator | 
changed: [testbed-node-5] 2025-06-05 19:38:16.315815 | orchestrator | 2025-06-05 19:38:16.315823 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315831 | orchestrator | Thursday 05 June 2025 19:36:45 +0000 (0:00:01.562) 0:00:56.587 ********* 2025-06-05 19:38:16.315839 | orchestrator | 2025-06-05 19:38:16.315847 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315854 | orchestrator | Thursday 05 June 2025 19:36:45 +0000 (0:00:00.224) 0:00:56.812 ********* 2025-06-05 19:38:16.315863 | orchestrator | 2025-06-05 19:38:16.315871 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315878 | orchestrator | Thursday 05 June 2025 19:36:45 +0000 (0:00:00.068) 0:00:56.880 ********* 2025-06-05 19:38:16.315886 | orchestrator | 2025-06-05 19:38:16.315894 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315902 | orchestrator | Thursday 05 June 2025 19:36:45 +0000 (0:00:00.089) 0:00:56.970 ********* 2025-06-05 19:38:16.315910 | orchestrator | 2025-06-05 19:38:16.315918 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315926 | orchestrator | Thursday 05 June 2025 19:36:45 +0000 (0:00:00.061) 0:00:57.032 ********* 2025-06-05 19:38:16.315934 | orchestrator | 2025-06-05 19:38:16.315942 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315950 | orchestrator | Thursday 05 June 2025 19:36:46 +0000 (0:00:00.062) 0:00:57.095 ********* 2025-06-05 19:38:16.315958 | orchestrator | 2025-06-05 19:38:16.315966 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-05 19:38:16.315974 | orchestrator | Thursday 05 June 2025 19:36:46 +0000 
(0:00:00.066) 0:00:57.161 ********* 2025-06-05 19:38:16.315981 | orchestrator | 2025-06-05 19:38:16.315989 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-05 19:38:16.315997 | orchestrator | Thursday 05 June 2025 19:36:46 +0000 (0:00:00.080) 0:00:57.242 ********* 2025-06-05 19:38:16.316005 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.316013 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.316021 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.316029 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.316037 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.316044 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:38:16.316052 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.316060 | orchestrator | 2025-06-05 19:38:16.316068 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-05 19:38:16.316081 | orchestrator | Thursday 05 June 2025 19:37:25 +0000 (0:00:39.808) 0:01:37.051 ********* 2025-06-05 19:38:16.316089 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.316096 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:38:16.316104 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.316112 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.316120 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.316128 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.316136 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.316144 | orchestrator | 2025-06-05 19:38:16.316151 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-05 19:38:16.316159 | orchestrator | Thursday 05 June 2025 19:38:08 +0000 (0:00:42.371) 0:02:19.422 ********* 2025-06-05 19:38:16.316167 | orchestrator | ok: [testbed-manager] 2025-06-05 19:38:16.316175 | orchestrator | ok: 
[testbed-node-1] 2025-06-05 19:38:16.316183 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:38:16.316191 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:38:16.316199 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:38:16.316207 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:38:16.316214 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:38:16.316222 | orchestrator | 2025-06-05 19:38:16.316230 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-05 19:38:16.316238 | orchestrator | Thursday 05 June 2025 19:38:10 +0000 (0:00:01.876) 0:02:21.299 ********* 2025-06-05 19:38:16.316246 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:38:16.316254 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:38:16.316262 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:38:16.316270 | orchestrator | changed: [testbed-manager] 2025-06-05 19:38:16.316278 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:38:16.316286 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:38:16.316294 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:38:16.316302 | orchestrator | 2025-06-05 19:38:16.316309 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:38:16.316319 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316327 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316357 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316371 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316380 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316391 | 
orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316400 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-05 19:38:16.316408 | orchestrator | 2025-06-05 19:38:16.316416 | orchestrator | 2025-06-05 19:38:16.316424 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:38:16.316432 | orchestrator | Thursday 05 June 2025 19:38:14 +0000 (0:00:04.650) 0:02:25.949 ********* 2025-06-05 19:38:16.316440 | orchestrator | =============================================================================== 2025-06-05 19:38:16.316448 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 42.37s 2025-06-05 19:38:16.316456 | orchestrator | common : Restart fluentd container ------------------------------------- 39.81s 2025-06-05 19:38:16.316464 | orchestrator | common : Copying over config.json files for services -------------------- 6.04s 2025-06-05 19:38:16.316478 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.52s 2025-06-05 19:38:16.316486 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.35s 2025-06-05 19:38:16.316494 | orchestrator | common : Restart cron container ----------------------------------------- 4.65s 2025-06-05 19:38:16.316502 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.35s 2025-06-05 19:38:16.316510 | orchestrator | common : Check common containers ---------------------------------------- 4.17s 2025-06-05 19:38:16.316517 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.91s 2025-06-05 19:38:16.316525 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.83s 2025-06-05 19:38:16.316533 | orchestrator | common : Copying over 
cron logrotate config file ------------------------ 2.62s 2025-06-05 19:38:16.316541 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.51s 2025-06-05 19:38:16.316549 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.35s 2025-06-05 19:38:16.316557 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.16s 2025-06-05 19:38:16.316565 | orchestrator | common : Creating log volume -------------------------------------------- 2.14s 2025-06-05 19:38:16.316573 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.88s 2025-06-05 19:38:16.316581 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.78s 2025-06-05 19:38:16.316589 | orchestrator | common : include_tasks -------------------------------------------------- 1.59s 2025-06-05 19:38:16.316597 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.56s 2025-06-05 19:38:16.316605 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.47s 2025-06-05 19:38:16.316613 | orchestrator | 2025-06-05 19:38:16 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:38:19.359428 | orchestrator | 2025-06-05 19:38:19 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:38:19.359524 | orchestrator | 2025-06-05 19:38:19 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state STARTED 2025-06-05 19:38:19.359539 | orchestrator | 2025-06-05 19:38:19 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:38:19.361318 | orchestrator | 2025-06-05 19:38:19 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:38:19.367246 | orchestrator | 2025-06-05 19:38:19 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED 2025-06-05 19:38:19.367272 | 
orchestrator | 2025-06-05 19:38:19 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:19.367284 | orchestrator | 2025-06-05 19:38:19 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:22.390471 | orchestrator | 2025-06-05 19:38:22 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:22.390562 | orchestrator | 2025-06-05 19:38:22 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state STARTED
2025-06-05 19:38:22.390915 | orchestrator | 2025-06-05 19:38:22 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:22.391455 | orchestrator | 2025-06-05 19:38:22 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:22.392139 | orchestrator | 2025-06-05 19:38:22 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:22.392589 | orchestrator | 2025-06-05 19:38:22 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:22.393824 | orchestrator | 2025-06-05 19:38:22 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:25.434992 | orchestrator | 2025-06-05 19:38:25 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:25.435085 | orchestrator | 2025-06-05 19:38:25 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state STARTED
2025-06-05 19:38:25.435116 | orchestrator | 2025-06-05 19:38:25 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:25.435635 | orchestrator | 2025-06-05 19:38:25 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:25.436182 | orchestrator | 2025-06-05 19:38:25 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:25.438990 | orchestrator | 2025-06-05 19:38:25 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:25.439017 | orchestrator | 2025-06-05 19:38:25 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:28.470251 | orchestrator | 2025-06-05 19:38:28 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:28.470491 | orchestrator | 2025-06-05 19:38:28 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state STARTED
2025-06-05 19:38:28.476356 | orchestrator | 2025-06-05 19:38:28 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:28.476930 | orchestrator | 2025-06-05 19:38:28 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:28.477409 | orchestrator | 2025-06-05 19:38:28 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:28.478678 | orchestrator | 2025-06-05 19:38:28 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:28.478703 | orchestrator | 2025-06-05 19:38:28 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:31.523565 | orchestrator | 2025-06-05 19:38:31 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:31.525170 | orchestrator | 2025-06-05 19:38:31 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state STARTED
2025-06-05 19:38:31.525204 | orchestrator | 2025-06-05 19:38:31 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:31.526916 | orchestrator | 2025-06-05 19:38:31 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:31.527697 | orchestrator | 2025-06-05 19:38:31 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:31.529215 | orchestrator | 2025-06-05 19:38:31 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:31.529276 | orchestrator | 2025-06-05 19:38:31 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:34.586950 | orchestrator | 2025-06-05 19:38:34 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:34.590189 | orchestrator | 2025-06-05 19:38:34 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state STARTED
2025-06-05 19:38:34.593282 | orchestrator | 2025-06-05 19:38:34 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:34.593370 | orchestrator | 2025-06-05 19:38:34 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:34.593384 | orchestrator | 2025-06-05 19:38:34 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:34.593396 | orchestrator | 2025-06-05 19:38:34 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:34.593408 | orchestrator | 2025-06-05 19:38:34 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:37.618620 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:37.619139 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:37.619595 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task 72263ed0-2d90-4ff8-9b2d-5b7b90fb7bd1 is in state SUCCESS
2025-06-05 19:38:37.621713 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:37.622237 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:37.623521 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:37.626484 | orchestrator | 2025-06-05 19:38:37 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:37.626507 | orchestrator | 2025-06-05 19:38:37 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:40.662556 | orchestrator | 2025-06-05 19:38:40 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:40.665160 | orchestrator | 2025-06-05 19:38:40 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:40.666190 | orchestrator | 2025-06-05 19:38:40 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:40.667406 | orchestrator | 2025-06-05 19:38:40 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:40.668176 | orchestrator | 2025-06-05 19:38:40 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:40.668999 | orchestrator | 2025-06-05 19:38:40 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:40.669027 | orchestrator | 2025-06-05 19:38:40 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:43.721827 | orchestrator | 2025-06-05 19:38:43 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:43.725437 | orchestrator | 2025-06-05 19:38:43 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:43.729193 | orchestrator | 2025-06-05 19:38:43 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:43.729238 | orchestrator | 2025-06-05 19:38:43 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:43.729250 | orchestrator | 2025-06-05 19:38:43 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:43.729621 | orchestrator | 2025-06-05 19:38:43 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:43.729643 | orchestrator | 2025-06-05 19:38:43 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:46.756657 | orchestrator | 2025-06-05 19:38:46 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:46.758201 | orchestrator | 2025-06-05
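The repeated "is in state STARTED … Wait 1 second(s) until the next check" lines above come from a client polling a set of background tasks until each reaches a terminal state. A minimal sketch of such a loop, assuming a hypothetical `get_task_state` callable (this is not the OSISM client itself):

```python
import time

# Terminal Celery-style task states; STARTED/PENDING keep the loop going.
TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    """Poll every task until all of them have reached a terminal state."""
    pending = set(task_ids)
    while pending:
        # sorted() copies the set, so discarding while iterating is safe.
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

In the log above the observed cadence is roughly three seconds per cycle even though the message says one second, because fetching the states of several tasks takes time on top of the sleep.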
19:38:46 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:46.759388 | orchestrator | 2025-06-05 19:38:46 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:46.760133 | orchestrator | 2025-06-05 19:38:46 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:46.760665 | orchestrator | 2025-06-05 19:38:46 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:46.761557 | orchestrator | 2025-06-05 19:38:46 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state STARTED
2025-06-05 19:38:46.761695 | orchestrator | 2025-06-05 19:38:46 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:49.804373 | orchestrator | 2025-06-05 19:38:49 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:49.804439 | orchestrator | 2025-06-05 19:38:49 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:49.804884 | orchestrator | 2025-06-05 19:38:49 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:49.805676 | orchestrator | 2025-06-05 19:38:49 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:49.806376 | orchestrator | 2025-06-05 19:38:49 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:49.808864 | orchestrator |
2025-06-05 19:38:49.808896 | orchestrator |
2025-06-05 19:38:49.808906 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:38:49.808914 | orchestrator |
2025-06-05 19:38:49.808922 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:38:49.808930 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.328) 0:00:00.328 *********
2025-06-05 19:38:49.808938 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:38:49.808947 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:38:49.808955 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:38:49.808963 | orchestrator |
2025-06-05 19:38:49.808971 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:38:49.808979 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.334) 0:00:00.663 *********
2025-06-05 19:38:49.808987 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-05 19:38:49.808995 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-05 19:38:49.809003 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-05 19:38:49.809011 | orchestrator |
2025-06-05 19:38:49.809019 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-05 19:38:49.809026 | orchestrator |
2025-06-05 19:38:49.809034 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-05 19:38:49.809042 | orchestrator | Thursday 05 June 2025 19:38:21 +0000 (0:00:00.446) 0:00:01.110 *********
2025-06-05 19:38:49.809050 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:38:49.809058 | orchestrator |
2025-06-05 19:38:49.809066 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-05 19:38:49.809079 | orchestrator | Thursday 05 June 2025 19:38:22 +0000 (0:00:00.599) 0:00:01.709 *********
2025-06-05 19:38:49.809097 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-05 19:38:49.809109 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-05 19:38:49.809121 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-05 19:38:49.809132 | orchestrator |
2025-06-05 19:38:49.809145 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-05 19:38:49.809158 | orchestrator | Thursday 05 June 2025 19:38:22 +0000 (0:00:00.776) 0:00:02.486 *********
2025-06-05 19:38:49.809166 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-05 19:38:49.809174 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-05 19:38:49.809182 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-05 19:38:49.809190 | orchestrator |
2025-06-05 19:38:49.809198 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-05 19:38:49.809206 | orchestrator | Thursday 05 June 2025 19:38:24 +0000 (0:00:01.762) 0:00:04.248 *********
2025-06-05 19:38:49.809213 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:38:49.809221 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:38:49.809229 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:38:49.809237 | orchestrator |
2025-06-05 19:38:49.809246 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-05 19:38:49.809267 | orchestrator | Thursday 05 June 2025 19:38:26 +0000 (0:00:02.417) 0:00:06.666 *********
2025-06-05 19:38:49.809276 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:38:49.809284 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:38:49.809317 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:38:49.809325 | orchestrator |
2025-06-05 19:38:49.809333 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:38:49.809341 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:38:49.809350 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:38:49.809359 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:38:49.809367 | orchestrator |
2025-06-05 19:38:49.809375 | orchestrator |
2025-06-05 19:38:49.809383 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:38:49.809391 | orchestrator | Thursday 05 June 2025 19:38:35 +0000 (0:00:08.477) 0:00:15.143 *********
2025-06-05 19:38:49.809399 | orchestrator | ===============================================================================
2025-06-05 19:38:49.809407 | orchestrator | memcached : Restart memcached container --------------------------------- 8.48s
2025-06-05 19:38:49.809415 | orchestrator | memcached : Check memcached container ----------------------------------- 2.42s
2025-06-05 19:38:49.809423 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.76s
2025-06-05 19:38:49.809431 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.78s
2025-06-05 19:38:49.809439 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.60s
2025-06-05 19:38:49.809447 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-06-05 19:38:49.809455 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-06-05 19:38:49.809463 | orchestrator |
2025-06-05 19:38:49.809471 | orchestrator |
2025-06-05 19:38:49.809480 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:38:49.809489 | orchestrator |
2025-06-05 19:38:49.809498 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:38:49.809507 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.341) 0:00:00.341 *********
2025-06-05 19:38:49.809516 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:38:49.809525 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:38:49.809534 | orchestrator | ok: [testbed-node-2] 2025-06-05
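The PLAY RECAP blocks in this log ("ok=7  changed=4  unreachable=0 failed=0 …") are easy to check mechanically, for example when a wrapper script should fail the build as soon as any host reports failures. A small sketch under the assumption that the input follows Ansible's standard recap format (these helper names are invented for illustration):

```python
import re

# One recap line per host: "<host> : ok=7  changed=4  unreachable=0 ..."
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Parse one PLAY RECAP line into (host, {counter: value}), or None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counts = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("counts"))}
    return m.group("host"), counts

def recap_ok(counts):
    """A host is healthy if nothing failed and it was reachable."""
    return counts.get("failed", 0) == 0 and counts.get("unreachable", 0) == 0
```

Applied to the recap above, all three testbed nodes report `failed=0 unreachable=0`, so the memcached play would count as healthy.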
19:38:49.809543 | orchestrator | 2025-06-05 19:38:49.809552 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:38:49.809569 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.393) 0:00:00.735 ********* 2025-06-05 19:38:49.809578 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-05 19:38:49.809587 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-05 19:38:49.809597 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-05 19:38:49.809606 | orchestrator | 2025-06-05 19:38:49.809615 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-05 19:38:49.809624 | orchestrator | 2025-06-05 19:38:49.809633 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-05 19:38:49.809642 | orchestrator | Thursday 05 June 2025 19:38:21 +0000 (0:00:00.613) 0:00:01.349 ********* 2025-06-05 19:38:49.809651 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:38:49.809660 | orchestrator | 2025-06-05 19:38:49.809669 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-05 19:38:49.809678 | orchestrator | Thursday 05 June 2025 19:38:22 +0000 (0:00:00.743) 0:00:02.092 ********* 2025-06-05 19:38:49.809698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 
19:38:49.809742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809769 | orchestrator | 2025-06-05 19:38:49.809779 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-05 19:38:49.809792 | orchestrator | Thursday 05 June 2025 19:38:23 +0000 (0:00:01.258) 0:00:03.350 ********* 2025-06-05 19:38:49.809800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 
'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809865 | orchestrator | 2025-06-05 19:38:49.809874 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 
2025-06-05 19:38:49.809882 | orchestrator | Thursday 05 June 2025 19:38:26 +0000 (0:00:03.161) 0:00:06.511 ********* 2025-06-05 19:38:49.809890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809920 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809954 | orchestrator | 2025-06-05 19:38:49.809963 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-05 19:38:49.809971 | orchestrator | Thursday 05 June 2025 19:38:30 +0000 (0:00:03.338) 0:00:09.850 ********* 2025-06-05 19:38:49.809979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.809991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.810000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.810008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.810102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-05 19:38:49.810128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-05 19:38:49.810146 | orchestrator |
2025-06-05 19:38:49.810155 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-05 19:38:49.810163 | orchestrator | Thursday 05 June 2025 19:38:32 +0000 (0:00:02.191) 0:00:12.042 *********
2025-06-05 19:38:49.810171 | orchestrator |
2025-06-05 19:38:49.810179 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-05 19:38:49.810187 | orchestrator | Thursday 05 June 2025 19:38:32 +0000 (0:00:00.237) 0:00:12.280 *********
2025-06-05 19:38:49.810195 | orchestrator |
2025-06-05 19:38:49.810203 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-05 19:38:49.810210 | orchestrator | Thursday 05 June 2025 19:38:32 +0000 (0:00:00.334) 0:00:12.614 *********
2025-06-05 19:38:49.810218 | orchestrator |
2025-06-05 19:38:49.810226 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-05 19:38:49.810234 | orchestrator | Thursday 05 June 2025 19:38:33 +0000 (0:00:00.279) 0:00:12.894 *********
2025-06-05 19:38:49.810242 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:38:49.810250 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:38:49.810258 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:38:49.810266 | orchestrator |
2025-06-05 19:38:49.810274 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-05 19:38:49.810281 | orchestrator | Thursday 05 June 2025 19:38:37 +0000 (0:00:04.666) 0:00:17.560 *********
2025-06-05 19:38:49.810302 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:38:49.810311 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:38:49.810319 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:38:49.810327 | orchestrator |
2025-06-05 19:38:49.810339 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:38:49.810347 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:38:49.810356 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:38:49.810364 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:38:49.810372 | orchestrator |
2025-06-05 19:38:49.810379 | orchestrator |
2025-06-05 19:38:49.810387 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:38:49.810395 | orchestrator | Thursday 05 June 2025 19:38:46 +0000 (0:00:08.753) 0:00:26.313 *********
2025-06-05 19:38:49.810403 | orchestrator | ===============================================================================
2025-06-05 19:38:49.810411 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.75s
2025-06-05 19:38:49.810419 | orchestrator | redis : Restart redis container ----------------------------------------- 4.67s
2025-06-05 19:38:49.810427 | orchestrator | redis : Copying over redis config files --------------------------------- 3.34s
2025-06-05 19:38:49.810435 | orchestrator | redis : Copying over default config.json files -------------------------- 3.16s
2025-06-05 19:38:49.810442 | orchestrator | redis : Check redis containers ------------------------------------------ 2.19s
2025-06-05 19:38:49.810450 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.26s
2025-06-05 19:38:49.810458 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.85s
2025-06-05 19:38:49.810466 | orchestrator | redis : include_tasks --------------------------------------------------- 0.74s
2025-06-05 19:38:49.810473 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2025-06-05 19:38:49.810481 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.39s
2025-06-05 19:38:49.810497 | orchestrator | 2025-06-05 19:38:49 | INFO  | Task 1807c762-8d28-4652-9f11-0e0bc2c889fe is in state SUCCESS
2025-06-05 19:38:49.810505 | orchestrator | 2025-06-05 19:38:49 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:52.838215 | orchestrator | 2025-06-05 19:38:52 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:52.838396 | orchestrator | 2025-06-05 19:38:52 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:52.839109 | orchestrator | 2025-06-05 19:38:52 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:52.839806 | orchestrator | 2025-06-05 19:38:52 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:52.840724 | orchestrator | 2025-06-05 19:38:52 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:52.840745 | orchestrator | 2025-06-05 19:38:52 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:55.878822 | orchestrator | 2025-06-05 19:38:55 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:55.879191 | orchestrator | 2025-06-05 19:38:55 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:55.881067 | orchestrator | 2025-06-05 19:38:55 |
INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:55.884129 | orchestrator | 2025-06-05 19:38:55 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:55.884151 | orchestrator | 2025-06-05 19:38:55 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:55.884163 | orchestrator | 2025-06-05 19:38:55 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:38:58.912688 | orchestrator | 2025-06-05 19:38:58 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:38:58.913041 | orchestrator | 2025-06-05 19:38:58 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:38:58.916015 | orchestrator | 2025-06-05 19:38:58 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:38:58.916420 | orchestrator | 2025-06-05 19:38:58 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:38:58.917117 | orchestrator | 2025-06-05 19:38:58 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:38:58.917138 | orchestrator | 2025-06-05 19:38:58 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:01.945835 | orchestrator | 2025-06-05 19:39:01 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:01.945991 | orchestrator | 2025-06-05 19:39:01 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:01.946501 | orchestrator | 2025-06-05 19:39:01 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:01.947340 | orchestrator | 2025-06-05 19:39:01 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:01.949194 | orchestrator | 2025-06-05 19:39:01 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:01.949220 | orchestrator | 2025-06-05 19:39:01 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:04.980390 | orchestrator | 2025-06-05 19:39:04 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:04.980983 | orchestrator | 2025-06-05 19:39:04 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:04.981768 | orchestrator | 2025-06-05 19:39:04 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:04.982575 | orchestrator | 2025-06-05 19:39:04 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:04.983472 | orchestrator | 2025-06-05 19:39:04 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:04.983496 | orchestrator | 2025-06-05 19:39:04 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:08.028454 | orchestrator | 2025-06-05 19:39:08 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:08.029097 | orchestrator | 2025-06-05 19:39:08 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:08.029710 | orchestrator | 2025-06-05 19:39:08 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:08.030920 | orchestrator | 2025-06-05 19:39:08 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:08.030949 | orchestrator | 2025-06-05 19:39:08 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:08.030961 | orchestrator | 2025-06-05 19:39:08 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:11.063932 | orchestrator | 2025-06-05 19:39:11 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:11.064047 | orchestrator | 2025-06-05 19:39:11 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:11.066927 | orchestrator | 2025-06-05 19:39:11 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:11.066968 | orchestrator | 2025-06-05 19:39:11 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:11.071198 | orchestrator | 2025-06-05 19:39:11 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:11.071329 | orchestrator | 2025-06-05 19:39:11 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:14.133516 | orchestrator | 2025-06-05 19:39:14 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:14.133649 | orchestrator | 2025-06-05 19:39:14 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:14.133884 | orchestrator | 2025-06-05 19:39:14 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:14.134796 | orchestrator | 2025-06-05 19:39:14 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:14.135972 | orchestrator | 2025-06-05 19:39:14 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:14.136060 | orchestrator | 2025-06-05 19:39:14 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:17.179526 | orchestrator | 2025-06-05 19:39:17 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:17.181352 | orchestrator | 2025-06-05 19:39:17 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:17.184046 | orchestrator | 2025-06-05 19:39:17 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:17.184712 | orchestrator | 2025-06-05 19:39:17 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:17.186696 | orchestrator | 2025-06-05 19:39:17 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:17.186722 | orchestrator | 2025-06-05 19:39:17 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:20.220678 | orchestrator | 2025-06-05 19:39:20 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:20.221156 | orchestrator | 2025-06-05 19:39:20 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:20.221838 | orchestrator | 2025-06-05 19:39:20 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:20.225902 | orchestrator | 2025-06-05 19:39:20 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:20.227062 | orchestrator | 2025-06-05 19:39:20 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:20.227097 | orchestrator | 2025-06-05 19:39:20 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:23.260340 | orchestrator | 2025-06-05 19:39:23 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:23.260433 | orchestrator | 2025-06-05 19:39:23 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:23.260667 | orchestrator | 2025-06-05 19:39:23 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:23.262593 | orchestrator | 2025-06-05 19:39:23 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:23.263450 | orchestrator | 2025-06-05 19:39:23 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state STARTED
2025-06-05 19:39:23.263489 | orchestrator | 2025-06-05 19:39:23 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:39:26.293207 | orchestrator | 2025-06-05 19:39:26 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED
2025-06-05 19:39:26.293767 | orchestrator | 2025-06-05 19:39:26 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:39:26.298189 | orchestrator | 2025-06-05 19:39:26 | INFO  | Task
6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:39:26.300609 | orchestrator | 2025-06-05 19:39:26 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED
2025-06-05 19:39:26.302505 | orchestrator | 2025-06-05 19:39:26 | INFO  | Task 1f205339-6a5a-4d53-96d1-33f48c9442ea is in state SUCCESS
2025-06-05 19:39:26.308053 | orchestrator |
2025-06-05 19:39:26.308133 | orchestrator |
2025-06-05 19:39:26.308150 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:39:26.308163 | orchestrator |
2025-06-05 19:39:26.308175 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:39:26.308186 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.336) 0:00:00.336 *********
2025-06-05 19:39:26.308198 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:39:26.308210 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:39:26.308221 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:39:26.308232 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:39:26.308242 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:39:26.308290 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:39:26.308302 | orchestrator |
2025-06-05 19:39:26.308314 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:39:26.308325 | orchestrator | Thursday 05 June 2025 19:38:21 +0000 (0:00:00.739) 0:00:01.076 *********
2025-06-05 19:39:26.308336 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-05 19:39:26.308348 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-05 19:39:26.308359 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-05 19:39:26.308370 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-05 19:39:26.308381 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-05 19:39:26.308393 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-06-05 19:39:26.308430 | orchestrator |
2025-06-05 19:39:26.308442 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-06-05 19:39:26.308453 | orchestrator |
2025-06-05 19:39:26.308465 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-06-05 19:39:26.308475 | orchestrator | Thursday 05 June 2025 19:38:22 +0000 (0:00:00.941) 0:00:02.017 *********
2025-06-05 19:39:26.308487 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:39:26.308500 | orchestrator |
2025-06-05 19:39:26.308510 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-05 19:39:26.308521 | orchestrator | Thursday 05 June 2025 19:38:24 +0000 (0:00:01.542) 0:00:03.560 *********
2025-06-05 19:39:26.308532 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-05 19:39:26.308543 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-05 19:39:26.308554 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-05 19:39:26.308565 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-05 19:39:26.308576 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-05 19:39:26.308587 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-05 19:39:26.308600 | orchestrator |
2025-06-05 19:39:26.308612 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-05 19:39:26.308624 | orchestrator | Thursday 05 June 2025 19:38:25 +0000 (0:00:01.247) 0:00:04.807 *********
2025-06-05 19:39:26.308637 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-06-05 19:39:26.308663 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-06-05 19:39:26.308676 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-06-05 19:39:26.308689 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-06-05 19:39:26.308701 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-06-05 19:39:26.308714 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-06-05 19:39:26.308726 | orchestrator |
2025-06-05 19:39:26.308736 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-05 19:39:26.308747 | orchestrator | Thursday 05 June 2025 19:38:27 +0000 (0:00:02.268) 0:00:07.076 *********
2025-06-05 19:39:26.308758 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-06-05 19:39:26.308770 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:39:26.308781 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-06-05 19:39:26.308792 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:39:26.308803 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-06-05 19:39:26.308814 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:39:26.308825 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-06-05 19:39:26.308835 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:39:26.308846 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-06-05 19:39:26.308857 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:39:26.308868 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-06-05 19:39:26.308879 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:39:26.308889 | orchestrator |
2025-06-05 19:39:26.308900 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host]
***************** 2025-06-05 19:39:26.308911 | orchestrator | Thursday 05 June 2025 19:38:29 +0000 (0:00:01.775) 0:00:08.851 ********* 2025-06-05 19:39:26.308922 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:39:26.308933 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:39:26.308944 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:39:26.308954 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:39:26.308965 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:39:26.308976 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:39:26.308995 | orchestrator | 2025-06-05 19:39:26.309006 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-05 19:39:26.309017 | orchestrator | Thursday 05 June 2025 19:38:30 +0000 (0:00:01.017) 0:00:09.869 ********* 2025-06-05 19:39:26.309051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309110 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309129 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309189 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309201 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309225 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309237 | orchestrator | 2025-06-05 19:39:26.309282 | orchestrator | TASK [openvswitch 
: Copying over config.json files for services] *************** 2025-06-05 19:39:26.309295 | orchestrator | Thursday 05 June 2025 19:38:31 +0000 (0:00:01.530) 0:00:11.403 ********* 2025-06-05 19:39:26.309307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309348 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-06-05 19:39:26.309385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309409 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309425 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309436 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309461 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309473 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309484 | orchestrator | 2025-06-05 19:39:26.309496 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-05 19:39:26.309507 | orchestrator | Thursday 05 June 2025 19:38:36 +0000 (0:00:04.530) 0:00:15.933 ********* 2025-06-05 19:39:26.309518 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:39:26.309529 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:39:26.309540 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:39:26.309551 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:39:26.309562 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:39:26.309573 | orchestrator | 
skipping: [testbed-node-5] 2025-06-05 19:39:26.309584 | orchestrator | 2025-06-05 19:39:26.309595 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-05 19:39:26.309606 | orchestrator | Thursday 05 June 2025 19:38:38 +0000 (0:00:01.794) 0:00:17.728 ********* 2025-06-05 19:39:26.309618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 
'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309676 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309747 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309815 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309829 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309841 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-05 19:39:26.309852 | orchestrator | 2025-06-05 19:39:26.309863 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-05 19:39:26.309874 | orchestrator | Thursday 05 June 2025 19:38:41 +0000 (0:00:03.011) 0:00:20.740 ********* 2025-06-05 19:39:26.309885 | orchestrator | 2025-06-05 19:39:26.309897 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-05 19:39:26.309916 | orchestrator | Thursday 05 June 2025 19:38:41 +0000 (0:00:00.206) 0:00:20.946 ********* 2025-06-05 19:39:26.309927 | 
orchestrator | 2025-06-05 19:39:26.309938 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-05 19:39:26.309949 | orchestrator | Thursday 05 June 2025 19:38:41 +0000 (0:00:00.303) 0:00:21.249 ********* 2025-06-05 19:39:26.309960 | orchestrator | 2025-06-05 19:39:26.309977 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-05 19:39:26.309988 | orchestrator | Thursday 05 June 2025 19:38:41 +0000 (0:00:00.175) 0:00:21.425 ********* 2025-06-05 19:39:26.309999 | orchestrator | 2025-06-05 19:39:26.310010 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-05 19:39:26.310076 | orchestrator | Thursday 05 June 2025 19:38:42 +0000 (0:00:00.164) 0:00:21.590 ********* 2025-06-05 19:39:26.310089 | orchestrator | 2025-06-05 19:39:26.310100 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-05 19:39:26.310111 | orchestrator | Thursday 05 June 2025 19:38:42 +0000 (0:00:00.128) 0:00:21.718 ********* 2025-06-05 19:39:26.310122 | orchestrator | 2025-06-05 19:39:26.310132 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-05 19:39:26.310143 | orchestrator | Thursday 05 June 2025 19:38:42 +0000 (0:00:00.602) 0:00:22.321 ********* 2025-06-05 19:39:26.310154 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:39:26.310165 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:39:26.310176 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:39:26.310187 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:39:26.310198 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:39:26.310209 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:39:26.310220 | orchestrator | 2025-06-05 19:39:26.310230 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 
2025-06-05 19:39:26.310241 | orchestrator | Thursday 05 June 2025 19:38:52 +0000 (0:00:09.387) 0:00:31.709 ********* 2025-06-05 19:39:26.310278 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:39:26.310297 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:39:26.310317 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:39:26.310336 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:39:26.310351 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:39:26.310362 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:39:26.310373 | orchestrator | 2025-06-05 19:39:26.310392 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-05 19:39:26.310411 | orchestrator | Thursday 05 June 2025 19:38:53 +0000 (0:00:01.684) 0:00:33.393 ********* 2025-06-05 19:39:26.310429 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:39:26.310446 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:39:26.310464 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:39:26.310484 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:39:26.310496 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:39:26.310507 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:39:26.310517 | orchestrator | 2025-06-05 19:39:26.310528 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-05 19:39:26.310539 | orchestrator | Thursday 05 June 2025 19:39:03 +0000 (0:00:09.385) 0:00:42.779 ********* 2025-06-05 19:39:26.310560 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-05 19:39:26.310572 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-05 19:39:26.310583 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-05 19:39:26.310594 | orchestrator | 
changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-05 19:39:26.310605 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-05 19:39:26.310615 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-05 19:39:26.310635 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-05 19:39:26.310646 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-05 19:39:26.310657 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-05 19:39:26.310668 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-05 19:39:26.310679 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-05 19:39:26.310690 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-05 19:39:26.310700 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-05 19:39:26.310711 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-05 19:39:26.310722 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-05 19:39:26.310733 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-05 19:39:26.310743 | orchestrator | ok: 
[testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-05 19:39:26.310754 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-05 19:39:26.310765 | orchestrator | 2025-06-05 19:39:26.310776 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-05 19:39:26.310791 | orchestrator | Thursday 05 June 2025 19:39:11 +0000 (0:00:07.983) 0:00:50.763 ********* 2025-06-05 19:39:26.310816 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-05 19:39:26.310833 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:39:26.310850 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-05 19:39:26.310867 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:39:26.310883 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-05 19:39:26.310899 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:39:26.310916 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-05 19:39:26.310933 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-05 19:39:26.310949 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-05 19:39:26.310968 | orchestrator | 2025-06-05 19:39:26.310987 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-05 19:39:26.311006 | orchestrator | Thursday 05 June 2025 19:39:13 +0000 (0:00:02.189) 0:00:52.952 ********* 2025-06-05 19:39:26.311019 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-05 19:39:26.311030 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:39:26.311040 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-05 19:39:26.311051 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:39:26.311062 | orchestrator | skipping: [testbed-node-5] => 
(item=['br-ex', 'vxlan0'])  2025-06-05 19:39:26.311073 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:39:26.311083 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-05 19:39:26.311094 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-05 19:39:26.311105 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-05 19:39:26.311115 | orchestrator | 2025-06-05 19:39:26.311126 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-05 19:39:26.311137 | orchestrator | Thursday 05 June 2025 19:39:17 +0000 (0:00:03.775) 0:00:56.727 ********* 2025-06-05 19:39:26.311156 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:39:26.311166 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:39:26.311177 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:39:26.311188 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:39:26.311199 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:39:26.311209 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:39:26.311220 | orchestrator | 2025-06-05 19:39:26.311231 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:39:26.311242 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-05 19:39:26.311332 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-05 19:39:26.311345 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-05 19:39:26.311356 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-05 19:39:26.311367 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-05 19:39:26.311378 | orchestrator | 
testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-05 19:39:26.311389 | orchestrator | 2025-06-05 19:39:26.311399 | orchestrator | 2025-06-05 19:39:26.311410 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:39:26.311421 | orchestrator | Thursday 05 June 2025 19:39:25 +0000 (0:00:07.885) 0:01:04.613 ********* 2025-06-05 19:39:26.311432 | orchestrator | =============================================================================== 2025-06-05 19:39:26.311443 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 17.27s 2025-06-05 19:39:26.311453 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.39s 2025-06-05 19:39:26.311464 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.98s 2025-06-05 19:39:26.311475 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.53s 2025-06-05 19:39:26.311486 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.78s 2025-06-05 19:39:26.311496 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.01s 2025-06-05 19:39:26.311507 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.27s 2025-06-05 19:39:26.311518 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.19s 2025-06-05 19:39:26.311529 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.79s 2025-06-05 19:39:26.311539 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.78s 2025-06-05 19:39:26.311550 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.68s 2025-06-05 19:39:26.311561 | orchestrator | openvswitch : Flush Handlers 
-------------------------------------------- 1.58s 2025-06-05 19:39:26.311571 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.54s 2025-06-05 19:39:26.311582 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.53s 2025-06-05 19:39:26.311593 | orchestrator | module-load : Load modules ---------------------------------------------- 1.25s 2025-06-05 19:39:26.311603 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.02s 2025-06-05 19:39:26.311620 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.94s 2025-06-05 19:39:26.311631 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2025-06-05 19:39:26.311642 | orchestrator | 2025-06-05 19:39:26 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:39:29.343332 | orchestrator | 2025-06-05 19:39:29 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED 2025-06-05 19:39:29.346124 | orchestrator | 2025-06-05 19:39:29 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:39:29.352071 | orchestrator | 2025-06-05 19:39:29 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:39:29.352451 | orchestrator | 2025-06-05 19:39:29 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:39:29.354403 | orchestrator | 2025-06-05 19:39:29 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:39:29.354494 | orchestrator | 2025-06-05 19:39:29 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:39:32.386286 | orchestrator | 2025-06-05 19:39:32 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED 2025-06-05 19:39:32.386836 | orchestrator | 2025-06-05 19:39:32 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:39:32.388343 | 
orchestrator | 2025-06-05 19:39:32 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:39:32.389984 | orchestrator | 2025-06-05 19:39:32 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state STARTED 2025-06-05 19:39:32.391310 | orchestrator | 2025-06-05 19:39:32 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:39:32.391347 | orchestrator | 2025-06-05 19:39:32 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:40:15.032958 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task df1ae73d-ae4b-4085-a29a-98b13fdc0e01 is in state STARTED 2025-06-05 19:40:15.037233 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED 2025-06-05 19:40:15.037360 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:40:15.039839 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:40:15.041172 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task 67e736d4-8816-466f-a297-84d616386075 is in state STARTED 2025-06-05 19:40:15.045072 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task 3fa8b67d-efc0-436f-8104-ac32a90af70f is in state SUCCESS 2025-06-05 19:40:15.046729 | orchestrator | 2025-06-05 19:40:15.046756 | orchestrator | 2025-06-05 19:40:15.046767 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-05 19:40:15.046779 | orchestrator | 2025-06-05 19:40:15.046790 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec
'main' - Prerequisites] *** 2025-06-05 19:40:15.046801 | orchestrator | Thursday 05 June 2025 19:35:49 +0000 (0:00:00.147) 0:00:00.147 ********* 2025-06-05 19:40:15.046812 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:40:15.046825 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:40:15.046836 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:40:15.046847 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.046858 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.046868 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.046879 | orchestrator | 2025-06-05 19:40:15.046890 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-05 19:40:15.046901 | orchestrator | Thursday 05 June 2025 19:35:49 +0000 (0:00:00.620) 0:00:00.768 ********* 2025-06-05 19:40:15.046912 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.046924 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.046935 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.046946 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.046956 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.046967 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.046978 | orchestrator | 2025-06-05 19:40:15.046989 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-05 19:40:15.047014 | orchestrator | Thursday 05 June 2025 19:35:50 +0000 (0:00:00.651) 0:00:01.419 ********* 2025-06-05 19:40:15.047026 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.047037 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.047047 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.047058 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.047069 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.047080 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.047091 | 
orchestrator | 2025-06-05 19:40:15.047102 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-05 19:40:15.047113 | orchestrator | Thursday 05 June 2025 19:35:51 +0000 (0:00:00.842) 0:00:02.261 ********* 2025-06-05 19:40:15.047123 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:40:15.047154 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.047165 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:40:15.047176 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:40:15.047187 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.047198 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.047209 | orchestrator | 2025-06-05 19:40:15.047220 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-05 19:40:15.047231 | orchestrator | Thursday 05 June 2025 19:35:53 +0000 (0:00:01.915) 0:00:04.177 ********* 2025-06-05 19:40:15.047285 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:40:15.047297 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:40:15.047308 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:40:15.047318 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.047329 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.047340 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.047353 | orchestrator | 2025-06-05 19:40:15.047366 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-05 19:40:15.047379 | orchestrator | Thursday 05 June 2025 19:35:54 +0000 (0:00:01.344) 0:00:05.521 ********* 2025-06-05 19:40:15.047391 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:40:15.047404 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:40:15.047417 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:40:15.047430 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.047443 | orchestrator | 
changed: [testbed-node-1] 2025-06-05 19:40:15.047455 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.047468 | orchestrator | 2025-06-05 19:40:15.047480 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-05 19:40:15.047494 | orchestrator | Thursday 05 June 2025 19:35:55 +0000 (0:00:01.000) 0:00:06.522 ********* 2025-06-05 19:40:15.047506 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.047518 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.047531 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.047543 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.047556 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.047568 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.047582 | orchestrator | 2025-06-05 19:40:15.047595 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-05 19:40:15.047608 | orchestrator | Thursday 05 June 2025 19:35:56 +0000 (0:00:00.695) 0:00:07.217 ********* 2025-06-05 19:40:15.047620 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.047631 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.047642 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.047653 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.047664 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.047674 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.047685 | orchestrator | 2025-06-05 19:40:15.047696 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-05 19:40:15.047707 | orchestrator | Thursday 05 June 2025 19:35:57 +0000 (0:00:00.854) 0:00:08.071 ********* 2025-06-05 19:40:15.047718 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 19:40:15.047729 | orchestrator | skipping: [testbed-node-3] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 19:40:15.047740 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.047751 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 19:40:15.047762 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 19:40:15.047773 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.047784 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 19:40:15.047795 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 19:40:15.047806 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.047817 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 19:40:15.047847 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 19:40:15.047859 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.047870 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 19:40:15.047882 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 19:40:15.047893 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.047904 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 19:40:15.047914 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 19:40:15.047925 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.047936 | orchestrator | 2025-06-05 19:40:15.047947 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-05 19:40:15.047958 | orchestrator | Thursday 05 June 2025 19:35:58 +0000 (0:00:00.976) 0:00:09.048 ********* 2025-06-05 19:40:15.047969 | orchestrator | skipping: 
[testbed-node-3] 2025-06-05 19:40:15.047980 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.047990 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.048001 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.048012 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.048023 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.048033 | orchestrator | 2025-06-05 19:40:15.048050 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-05 19:40:15.048062 | orchestrator | Thursday 05 June 2025 19:35:59 +0000 (0:00:01.354) 0:00:10.402 ********* 2025-06-05 19:40:15.048073 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:40:15.048083 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:40:15.048094 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:40:15.048105 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.048116 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.048127 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.048138 | orchestrator | 2025-06-05 19:40:15.048148 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-05 19:40:15.048159 | orchestrator | Thursday 05 June 2025 19:36:00 +0000 (0:00:00.763) 0:00:11.166 ********* 2025-06-05 19:40:15.048170 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:40:15.048181 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.048192 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:40:15.048203 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.048214 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:40:15.048224 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.048235 | orchestrator | 2025-06-05 19:40:15.048265 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-05 19:40:15.048276 | orchestrator | 
Thursday 05 June 2025 19:36:06 +0000 (0:00:05.713) 0:00:16.880 ********* 2025-06-05 19:40:15.048287 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.048298 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.048308 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.048319 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.048330 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.048341 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.048352 | orchestrator | 2025-06-05 19:40:15.048363 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-05 19:40:15.048373 | orchestrator | Thursday 05 June 2025 19:36:07 +0000 (0:00:01.042) 0:00:17.922 ********* 2025-06-05 19:40:15.048384 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.048395 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.048406 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.048417 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.048427 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.048438 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.048455 | orchestrator | 2025-06-05 19:40:15.048466 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-05 19:40:15.048478 | orchestrator | Thursday 05 June 2025 19:36:09 +0000 (0:00:01.961) 0:00:19.884 ********* 2025-06-05 19:40:15.048489 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.048500 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.048511 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.048521 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.048532 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.048543 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.048554 
| orchestrator | 2025-06-05 19:40:15.048565 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-05 19:40:15.048576 | orchestrator | Thursday 05 June 2025 19:36:09 +0000 (0:00:00.912) 0:00:20.797 ********* 2025-06-05 19:40:15.048587 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-05 19:40:15.048598 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-05 19:40:15.048608 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.048619 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-05 19:40:15.048630 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-05 19:40:15.048641 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.048652 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-05 19:40:15.048663 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-05 19:40:15.048673 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.048684 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-05 19:40:15.048695 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-05 19:40:15.048706 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.048716 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-05 19:40:15.048727 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-05 19:40:15.048738 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.048749 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-05 19:40:15.048760 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-05 19:40:15.048771 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.048781 | orchestrator | 2025-06-05 19:40:15.048793 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-05 19:40:15.048810 | 
orchestrator | Thursday 05 June 2025 19:36:11 +0000 (0:00:01.095) 0:00:21.892 ********* 2025-06-05 19:40:15.048821 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:40:15.048832 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:40:15.048842 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:40:15.048853 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.048864 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.048875 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.048885 | orchestrator | 2025-06-05 19:40:15.048896 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-05 19:40:15.048907 | orchestrator | 2025-06-05 19:40:15.048918 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-05 19:40:15.048929 | orchestrator | Thursday 05 June 2025 19:36:12 +0000 (0:00:01.673) 0:00:23.566 ********* 2025-06-05 19:40:15.048940 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.048951 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.048962 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.048972 | orchestrator | 2025-06-05 19:40:15.048983 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-05 19:40:15.048994 | orchestrator | Thursday 05 June 2025 19:36:14 +0000 (0:00:01.676) 0:00:25.242 ********* 2025-06-05 19:40:15.049005 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.049016 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.049027 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.049043 | orchestrator | 2025-06-05 19:40:15.049060 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-05 19:40:15.049071 | orchestrator | Thursday 05 June 2025 19:36:16 +0000 (0:00:01.622) 0:00:26.864 ********* 2025-06-05 19:40:15.049082 | orchestrator | ok: 
[testbed-node-0] 2025-06-05 19:40:15.049092 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.049103 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.049114 | orchestrator | 2025-06-05 19:40:15.049125 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-05 19:40:15.049136 | orchestrator | Thursday 05 June 2025 19:36:17 +0000 (0:00:01.166) 0:00:28.031 ********* 2025-06-05 19:40:15.049147 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.049157 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.049168 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.049179 | orchestrator | 2025-06-05 19:40:15.049190 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-05 19:40:15.049200 | orchestrator | Thursday 05 June 2025 19:36:17 +0000 (0:00:00.777) 0:00:28.809 ********* 2025-06-05 19:40:15.049211 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.049222 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.049233 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.049261 | orchestrator | 2025-06-05 19:40:15.049273 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-05 19:40:15.049283 | orchestrator | Thursday 05 June 2025 19:36:18 +0000 (0:00:00.269) 0:00:29.078 ********* 2025-06-05 19:40:15.049294 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:40:15.049306 | orchestrator | 2025-06-05 19:40:15.049317 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-05 19:40:15.049328 | orchestrator | Thursday 05 June 2025 19:36:18 +0000 (0:00:00.696) 0:00:29.775 ********* 2025-06-05 19:40:15.049339 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.049350 | orchestrator | ok: [testbed-node-1] 2025-06-05 
19:40:15.049361 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.049372 | orchestrator | 2025-06-05 19:40:15.049383 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-05 19:40:15.049394 | orchestrator | Thursday 05 June 2025 19:36:22 +0000 (0:00:03.182) 0:00:32.957 ********* 2025-06-05 19:40:15.049405 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.049416 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.049427 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.049438 | orchestrator | 2025-06-05 19:40:15.049449 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-05 19:40:15.049460 | orchestrator | Thursday 05 June 2025 19:36:23 +0000 (0:00:00.961) 0:00:33.919 ********* 2025-06-05 19:40:15.049471 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.049481 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.049492 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.049503 | orchestrator | 2025-06-05 19:40:15.049514 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-05 19:40:15.049525 | orchestrator | Thursday 05 June 2025 19:36:24 +0000 (0:00:00.962) 0:00:34.881 ********* 2025-06-05 19:40:15.049536 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.049547 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.049558 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.049569 | orchestrator | 2025-06-05 19:40:15.049580 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-05 19:40:15.049591 | orchestrator | Thursday 05 June 2025 19:36:26 +0000 (0:00:02.347) 0:00:37.228 ********* 2025-06-05 19:40:15.049602 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.049613 | orchestrator | skipping: [testbed-node-1] 2025-06-05 
19:40:15.049624 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.049635 | orchestrator | 2025-06-05 19:40:15.049646 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-05 19:40:15.049664 | orchestrator | Thursday 05 June 2025 19:36:26 +0000 (0:00:00.361) 0:00:37.590 ********* 2025-06-05 19:40:15.049675 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.049686 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.049697 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.049707 | orchestrator | 2025-06-05 19:40:15.049718 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-05 19:40:15.049729 | orchestrator | Thursday 05 June 2025 19:36:27 +0000 (0:00:00.362) 0:00:37.953 ********* 2025-06-05 19:40:15.049740 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.049751 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.049761 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.049772 | orchestrator | 2025-06-05 19:40:15.049803 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-05 19:40:15.049814 | orchestrator | Thursday 05 June 2025 19:36:28 +0000 (0:00:01.750) 0:00:39.704 ********* 2025-06-05 19:40:15.049831 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-05 19:40:15.049843 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-05 19:40:15.049854 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 
2025-06-05 19:40:15.049865 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-05 19:40:15.049876 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-05 19:40:15.049887 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-05 19:40:15.049914 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-05 19:40:15.049926 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-05 19:40:15.049937 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-05 19:40:15.049947 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-05 19:40:15.049958 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-05 19:40:15.049969 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-05 19:40:15.049980 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-05 19:40:15.049991 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
2025-06-05 19:40:15.050002 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-05 19:40:15.050012 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.050141 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.050154 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.050165 | orchestrator | 2025-06-05 19:40:15.050176 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-05 19:40:15.050187 | orchestrator | Thursday 05 June 2025 19:37:24 +0000 (0:00:55.416) 0:01:35.120 ********* 2025-06-05 19:40:15.050198 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:40:15.050218 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:40:15.050228 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:40:15.050283 | orchestrator | 2025-06-05 19:40:15.050295 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-05 19:40:15.050306 | orchestrator | Thursday 05 June 2025 19:37:24 +0000 (0:00:00.293) 0:01:35.413 ********* 2025-06-05 19:40:15.050317 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.050328 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.050339 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.050350 | orchestrator | 2025-06-05 19:40:15.050361 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-05 19:40:15.050372 | orchestrator | Thursday 05 June 2025 19:37:25 +0000 (0:00:01.039) 0:01:36.453 ********* 2025-06-05 19:40:15.050383 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.050394 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.050404 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.050415 | orchestrator | 2025-06-05 19:40:15.050426 | orchestrator | TASK [k3s_server : Enable and check K3s service] 
******************************* 2025-06-05 19:40:15.050437 | orchestrator | Thursday 05 June 2025 19:37:26 +0000 (0:00:01.217) 0:01:37.670 ********* 2025-06-05 19:40:15.050448 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.050470 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.050482 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.050492 | orchestrator | 2025-06-05 19:40:15.050503 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-05 19:40:15.050514 | orchestrator | Thursday 05 June 2025 19:37:41 +0000 (0:00:14.221) 0:01:51.892 ********* 2025-06-05 19:40:15.050525 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.050536 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.050547 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.050558 | orchestrator | 2025-06-05 19:40:15.050569 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-05 19:40:15.050580 | orchestrator | Thursday 05 June 2025 19:37:41 +0000 (0:00:00.711) 0:01:52.603 ********* 2025-06-05 19:40:15.050591 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:40:15.050602 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:40:15.050612 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:40:15.050623 | orchestrator | 2025-06-05 19:40:15.050634 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-05 19:40:15.050645 | orchestrator | Thursday 05 June 2025 19:37:42 +0000 (0:00:00.620) 0:01:53.223 ********* 2025-06-05 19:40:15.050656 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:40:15.050667 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:40:15.050678 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:40:15.050689 | orchestrator | 2025-06-05 19:40:15.050709 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-05 
19:40:15.050720 | orchestrator | Thursday 05 June 2025 19:37:43 +0000 (0:00:00.721) 0:01:53.945 *********
2025-06-05 19:40:15.050731 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.050742 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.050752 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.050763 | orchestrator |
2025-06-05 19:40:15.050774 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-06-05 19:40:15.050785 | orchestrator | Thursday 05 June 2025 19:37:44 +0000 (0:00:00.981) 0:01:54.926 *********
2025-06-05 19:40:15.050796 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.050807 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.050817 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.050828 | orchestrator |
2025-06-05 19:40:15.050839 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-06-05 19:40:15.050850 | orchestrator | Thursday 05 June 2025 19:37:44 +0000 (0:00:00.276) 0:01:55.203 *********
2025-06-05 19:40:15.050861 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:40:15.050872 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:40:15.050883 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:40:15.050901 | orchestrator |
2025-06-05 19:40:15.050912 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-05 19:40:15.050929 | orchestrator | Thursday 05 June 2025 19:37:45 +0000 (0:00:00.684) 0:01:55.887 *********
2025-06-05 19:40:15.050940 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:40:15.050951 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:40:15.050962 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:40:15.050973 | orchestrator |
2025-06-05 19:40:15.050984 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-05 19:40:15.050995 | orchestrator | Thursday 05 June 2025 19:37:45 +0000 (0:00:00.615) 0:01:56.503 *********
2025-06-05 19:40:15.051006 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:40:15.051017 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:40:15.051028 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:40:15.051039 | orchestrator |
2025-06-05 19:40:15.051050 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-05 19:40:15.051061 | orchestrator | Thursday 05 June 2025 19:37:46 +0000 (0:00:01.177) 0:01:57.680 *********
2025-06-05 19:40:15.051072 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:40:15.051082 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:40:15.051093 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:40:15.051104 | orchestrator |
2025-06-05 19:40:15.051115 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-05 19:40:15.051126 | orchestrator | Thursday 05 June 2025 19:37:47 +0000 (0:00:00.855) 0:01:58.535 *********
2025-06-05 19:40:15.051137 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.051148 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:40:15.051159 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:40:15.051170 | orchestrator |
2025-06-05 19:40:15.051181 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-05 19:40:15.051192 | orchestrator | Thursday 05 June 2025 19:37:48 +0000 (0:00:00.339) 0:01:58.875 *********
2025-06-05 19:40:15.051203 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.051213 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:40:15.051224 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:40:15.051235 | orchestrator |
2025-06-05 19:40:15.051261 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-05 19:40:15.051272 | orchestrator | Thursday 05 June 2025 19:37:48 +0000 (0:00:00.328) 0:01:59.204 *********
2025-06-05 19:40:15.051283 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.051297 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.051315 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.051334 | orchestrator |
2025-06-05 19:40:15.051354 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-05 19:40:15.051373 | orchestrator | Thursday 05 June 2025 19:37:49 +0000 (0:00:01.279) 0:02:00.483 *********
2025-06-05 19:40:15.051386 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.051397 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.051408 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.051419 | orchestrator |
2025-06-05 19:40:15.051430 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-06-05 19:40:15.051441 | orchestrator | Thursday 05 June 2025 19:37:50 +0000 (0:00:00.611) 0:02:01.095 *********
2025-06-05 19:40:15.051452 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-05 19:40:15.051463 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-05 19:40:15.051474 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-05 19:40:15.051484 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-05 19:40:15.051495 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-05 19:40:15.051506 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-05 19:40:15.051524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-05 19:40:15.051535 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-05 19:40:15.051545 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-06-05 19:40:15.051556 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-06-05 19:40:15.051567 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-05 19:40:15.051578 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-05 19:40:15.051595 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-05 19:40:15.051607 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-05 19:40:15.051617 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-06-05 19:40:15.051628 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-05 19:40:15.051639 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-05 19:40:15.051650 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-06-05 19:40:15.051660 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-06-05 19:40:15.051671 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-06-05 19:40:15.051682 | orchestrator |
2025-06-05 19:40:15.051692 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-06-05 19:40:15.051703 | orchestrator |
2025-06-05 19:40:15.051714 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-06-05 19:40:15.051725 | orchestrator | Thursday 05 June 2025 19:37:53 +0000 (0:00:03.076) 0:02:04.172 *********
2025-06-05 19:40:15.051736 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:40:15.051747 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:40:15.051757 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:40:15.051768 | orchestrator |
2025-06-05 19:40:15.051779 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-06-05 19:40:15.051789 | orchestrator | Thursday 05 June 2025 19:37:53 +0000 (0:00:00.512) 0:02:04.685 *********
2025-06-05 19:40:15.051800 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:40:15.051811 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:40:15.051821 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:40:15.051832 | orchestrator |
2025-06-05 19:40:15.051843 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-06-05 19:40:15.051854 | orchestrator | Thursday 05 June 2025 19:37:54 +0000 (0:00:00.607) 0:02:05.292 *********
2025-06-05 19:40:15.051865 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:40:15.051875 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:40:15.051886 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:40:15.051896 | orchestrator |
2025-06-05 19:40:15.051907 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-06-05 19:40:15.051918 | orchestrator | Thursday 05 June 2025 19:37:54 +0000 (0:00:00.273) 0:02:05.566 *********
2025-06-05 19:40:15.051929 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:40:15.051940 | orchestrator |
2025-06-05 19:40:15.051951 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-06-05 19:40:15.051962 | orchestrator | Thursday 05 June 2025 19:37:55 +0000 (0:00:00.643) 0:02:06.209 *********
2025-06-05 19:40:15.051973 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:40:15.051984 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:40:15.052001 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:40:15.052012 | orchestrator |
2025-06-05 19:40:15.052023 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-06-05 19:40:15.052034 | orchestrator | Thursday 05 June 2025 19:37:55 +0000 (0:00:00.287) 0:02:06.497 *********
2025-06-05 19:40:15.052045 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:40:15.052055 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:40:15.052066 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:40:15.052077 | orchestrator |
2025-06-05 19:40:15.052088 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-06-05 19:40:15.052098 | orchestrator | Thursday 05 June 2025 19:37:55 +0000 (0:00:00.317) 0:02:06.814 *********
2025-06-05 19:40:15.052109 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:40:15.052120 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:40:15.052131 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:40:15.052141 | orchestrator |
2025-06-05 19:40:15.052764 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-06-05 19:40:15.052783 | orchestrator | Thursday 05 June 2025 19:37:56 +0000 (0:00:00.277) 0:02:07.091 *********
2025-06-05 19:40:15.052794 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:40:15.052805 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:40:15.052816 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:40:15.052826 | orchestrator |
2025-06-05 19:40:15.052838 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-06-05 19:40:15.052848 | orchestrator | Thursday 05 June 2025 19:37:57 +0000 (0:00:01.447) 0:02:08.539 *********
2025-06-05 19:40:15.052859 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:40:15.052870 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:40:15.052881 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:40:15.052892 | orchestrator |
2025-06-05 19:40:15.052902 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-06-05 19:40:15.052913 | orchestrator |
2025-06-05 19:40:15.052924 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-06-05 19:40:15.052935 | orchestrator | Thursday 05 June 2025 19:38:06 +0000 (0:00:08.645) 0:02:17.184 *********
2025-06-05 19:40:15.052946 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.052957 | orchestrator |
2025-06-05 19:40:15.052967 | orchestrator | TASK [Create .kube directory] **************************************************
2025-06-05 19:40:15.052978 | orchestrator | Thursday 05 June 2025 19:38:07 +0000 (0:00:00.372) 0:02:17.854 *********
2025-06-05 19:40:15.052989 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053000 | orchestrator |
2025-06-05 19:40:15.053011 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-06-05 19:40:15.053022 | orchestrator | Thursday 05 June 2025 19:38:07 +0000 (0:00:00.764) 0:02:18.226 *********
2025-06-05 19:40:15.053032 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-06-05 19:40:15.053043 | orchestrator |
2025-06-05 19:40:15.053062 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-06-05 19:40:15.053073 | orchestrator | Thursday 05 June 2025 19:38:08 +0000 (0:00:00.710) 0:02:18.991 *********
2025-06-05 19:40:15.053084 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053095 | orchestrator |
2025-06-05 19:40:15.053106 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-06-05 19:40:15.053117 | orchestrator | Thursday 05 June 2025 19:38:08 +0000 (0:00:00.507) 0:02:19.702 *********
2025-06-05 19:40:15.053128 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053139 | orchestrator |
2025-06-05 19:40:15.053155 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-06-05 19:40:15.053166 | orchestrator | Thursday 05 June 2025 19:38:09 +0000 (0:00:00.507) 0:02:20.209 *********
2025-06-05 19:40:15.053177 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-05 19:40:15.053188 | orchestrator |
2025-06-05 19:40:15.053199 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-06-05 19:40:15.053217 | orchestrator | Thursday 05 June 2025 19:38:10 +0000 (0:00:01.559) 0:02:21.769 *********
2025-06-05 19:40:15.053228 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-05 19:40:15.053291 | orchestrator |
2025-06-05 19:40:15.053304 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-06-05 19:40:15.053315 | orchestrator | Thursday 05 June 2025 19:38:11 +0000 (0:00:00.907) 0:02:22.676 *********
2025-06-05 19:40:15.053326 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053336 | orchestrator |
2025-06-05 19:40:15.053347 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-06-05 19:40:15.053358 | orchestrator | Thursday 05 June 2025 19:38:12 +0000 (0:00:00.469) 0:02:23.146 *********
2025-06-05 19:40:15.053368 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053379 | orchestrator |
2025-06-05 19:40:15.053390 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-06-05 19:40:15.053400 | orchestrator |
2025-06-05 19:40:15.053411 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-06-05 19:40:15.053422 | orchestrator | Thursday 05 June 2025 19:38:12 +0000 (0:00:00.452) 0:02:23.598 *********
2025-06-05 19:40:15.053432 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.053443 | orchestrator |
2025-06-05 19:40:15.053454 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-06-05 19:40:15.053464 | orchestrator | Thursday 05 June 2025 19:38:12 +0000 (0:00:00.149) 0:02:23.748 *********
2025-06-05 19:40:15.053476 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-06-05 19:40:15.053487 | orchestrator |
2025-06-05 19:40:15.053498 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-06-05 19:40:15.053508 | orchestrator | Thursday 05 June 2025 19:38:13 +0000 (0:00:00.522) 0:02:24.270 *********
2025-06-05 19:40:15.053519 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.053530 | orchestrator |
2025-06-05 19:40:15.053541 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-06-05 19:40:15.053552 | orchestrator | Thursday 05 June 2025 19:38:14 +0000 (0:00:00.810) 0:02:25.080 *********
2025-06-05 19:40:15.053563 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.053574 | orchestrator |
2025-06-05 19:40:15.053585 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-06-05 19:40:15.053596 | orchestrator | Thursday 05 June 2025 19:38:16 +0000 (0:00:01.823) 0:02:26.904 *********
2025-06-05 19:40:15.053606 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053617 | orchestrator |
2025-06-05 19:40:15.053628 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-06-05 19:40:15.053639 | orchestrator | Thursday 05 June 2025 19:38:16 +0000 (0:00:00.809) 0:02:27.713 *********
2025-06-05 19:40:15.053650 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.053660 | orchestrator |
2025-06-05 19:40:15.053671 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-06-05 19:40:15.053682 | orchestrator | Thursday 05 June 2025 19:38:17 +0000 (0:00:00.477) 0:02:28.191 *********
2025-06-05 19:40:15.053693 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053704 | orchestrator |
2025-06-05 19:40:15.053715 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-06-05 19:40:15.053726 | orchestrator | Thursday 05 June 2025 19:38:23 +0000 (0:00:06.042) 0:02:34.234 *********
2025-06-05 19:40:15.053737 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.053747 | orchestrator |
2025-06-05 19:40:15.053758 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-06-05 19:40:15.053769 | orchestrator | Thursday 05 June 2025 19:38:34 +0000 (0:00:10.616) 0:02:44.850 *********
2025-06-05 19:40:15.053780 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.053791 | orchestrator |
2025-06-05 19:40:15.053802 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-06-05 19:40:15.053813 | orchestrator |
2025-06-05 19:40:15.053823 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-06-05 19:40:15.053839 | orchestrator | Thursday 05 June 2025 19:38:34 +0000 (0:00:00.441) 0:02:45.292 *********
2025-06-05 19:40:15.053849 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.053859 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.053869 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.053878 | orchestrator |
2025-06-05 19:40:15.053888 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-06-05 19:40:15.053897 | orchestrator | Thursday 05 June 2025 19:38:35 +0000 (0:00:00.540) 0:02:45.832 *********
2025-06-05 19:40:15.053907 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.053917 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:40:15.053926 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:40:15.053936 | orchestrator |
2025-06-05 19:40:15.053946 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-06-05 19:40:15.053955 | orchestrator | Thursday 05 June 2025 19:38:35 +0000 (0:00:00.319) 0:02:46.151 *********
2025-06-05 19:40:15.053965 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:40:15.053975 | orchestrator |
2025-06-05 19:40:15.053985 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-06-05 19:40:15.054000 | orchestrator | Thursday 05 June 2025 19:38:35 +0000 (0:00:00.486) 0:02:46.638 *********
2025-06-05 19:40:15.054010 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054061 | orchestrator |
2025-06-05 19:40:15.054072 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-06-05 19:40:15.054082 | orchestrator | Thursday 05 June 2025 19:38:37 +0000 (0:00:01.343) 0:02:47.981 *********
2025-06-05 19:40:15.054092 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054101 | orchestrator |
2025-06-05 19:40:15.054115 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-06-05 19:40:15.054125 | orchestrator | Thursday 05 June 2025 19:38:38 +0000 (0:00:00.934) 0:02:48.915 *********
2025-06-05 19:40:15.054135 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054144 | orchestrator |
2025-06-05 19:40:15.054154 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-06-05 19:40:15.054164 | orchestrator | Thursday 05 June 2025 19:38:38 +0000 (0:00:00.258) 0:02:49.173 *********
2025-06-05 19:40:15.054173 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054183 | orchestrator |
2025-06-05 19:40:15.054192 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-06-05 19:40:15.054202 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:00.935) 0:02:50.109 *********
2025-06-05 19:40:15.054211 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054221 | orchestrator |
2025-06-05 19:40:15.054231 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-06-05 19:40:15.054253 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:00.189) 0:02:50.298 *********
2025-06-05 19:40:15.054263 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054273 | orchestrator |
2025-06-05 19:40:15.054283 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-06-05 19:40:15.054293 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:00.167) 0:02:50.466 *********
2025-06-05 19:40:15.054303 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054312 | orchestrator |
2025-06-05 19:40:15.054322 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-06-05 19:40:15.054332 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:00.151) 0:02:50.617 *********
2025-06-05 19:40:15.054342 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054351 | orchestrator |
2025-06-05 19:40:15.054361 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-06-05 19:40:15.054371 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:00.154) 0:02:50.772 *********
2025-06-05 19:40:15.054380 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054397 | orchestrator |
2025-06-05 19:40:15.054407 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-06-05 19:40:15.054417 | orchestrator | Thursday 05 June 2025 19:38:43 +0000 (0:00:04.009) 0:02:54.781 *********
2025-06-05 19:40:15.054427 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2025-06-05 19:40:15.054436 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2025-06-05 19:40:15.054446 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-06-05 19:40:15.054456 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-06-05 19:40:15.054466 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-06-05 19:40:15.054475 | orchestrator |
2025-06-05 19:40:15.054485 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-06-05 19:40:15.054495 | orchestrator | Thursday 05 June 2025 19:39:45 +0000 (0:01:01.229) 0:03:56.011 *********
2025-06-05 19:40:15.054505 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054514 | orchestrator |
2025-06-05 19:40:15.054524 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-06-05 19:40:15.054534 | orchestrator | Thursday 05 June 2025 19:39:46 +0000 (0:00:01.172) 0:03:57.183 *********
2025-06-05 19:40:15.054544 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054554 | orchestrator |
2025-06-05 19:40:15.054563 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-06-05 19:40:15.054573 | orchestrator | Thursday 05 June 2025 19:39:47 +0000 (0:00:01.197) 0:03:58.381 *********
2025-06-05 19:40:15.054583 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-05 19:40:15.054593 | orchestrator |
2025-06-05 19:40:15.054602 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-06-05 19:40:15.054612 | orchestrator | Thursday 05 June 2025 19:39:48 +0000 (0:00:01.265) 0:03:59.646 *********
2025-06-05 19:40:15.054622 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054631 | orchestrator |
2025-06-05 19:40:15.054641 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-06-05 19:40:15.054651 | orchestrator | Thursday 05 June 2025 19:39:48 +0000 (0:00:00.183) 0:03:59.830 *********
2025-06-05 19:40:15.054661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-06-05 19:40:15.054671 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-06-05 19:40:15.054681 | orchestrator |
2025-06-05 19:40:15.054705 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-06-05 19:40:15.054715 | orchestrator | Thursday 05 June 2025 19:39:51 +0000 (0:00:02.033) 0:04:01.863 *********
2025-06-05 19:40:15.054725 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.054735 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:40:15.054744 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:40:15.054754 | orchestrator |
2025-06-05 19:40:15.054764 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-06-05 19:40:15.054774 | orchestrator | Thursday 05 June 2025 19:39:51 +0000 (0:00:00.258) 0:04:02.122 *********
2025-06-05 19:40:15.054783 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.054793 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.054803 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.054812 | orchestrator |
2025-06-05 19:40:15.054828 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-06-05 19:40:15.054838 | orchestrator |
2025-06-05 19:40:15.054848 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-06-05 19:40:15.054858 | orchestrator | Thursday 05 June 2025 19:39:52 +0000 (0:00:00.805) 0:04:02.927 *********
2025-06-05 19:40:15.054867 | orchestrator | ok: [testbed-manager]
2025-06-05 19:40:15.054877 | orchestrator |
2025-06-05 19:40:15.054887 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-06-05 19:40:15.054906 | orchestrator | Thursday 05 June 2025 19:39:52 +0000 (0:00:00.278) 0:04:03.205 *********
2025-06-05 19:40:15.054916 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-06-05 19:40:15.054926 | orchestrator |
2025-06-05 19:40:15.054936 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-06-05 19:40:15.054945 | orchestrator | Thursday 05 June 2025 19:39:52 +0000 (0:00:00.268) 0:04:03.474 *********
2025-06-05 19:40:15.054955 | orchestrator | changed: [testbed-manager]
2025-06-05 19:40:15.054965 | orchestrator |
2025-06-05 19:40:15.054975 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-06-05 19:40:15.054984 | orchestrator |
2025-06-05 19:40:15.054994 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-06-05 19:40:15.055004 | orchestrator | Thursday 05 June 2025 19:39:58 +0000 (0:00:06.327) 0:04:09.802 *********
2025-06-05 19:40:15.055014 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:40:15.055023 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:40:15.055033 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:40:15.055043 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:40:15.055052 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:40:15.055062 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:40:15.055072 | orchestrator |
2025-06-05 19:40:15.055082 | orchestrator | TASK [Manage labels] ***********************************************************
2025-06-05 19:40:15.055091 | orchestrator | Thursday 05 June 2025 19:39:59 +0000 (0:00:01.014) 0:04:10.816 *********
2025-06-05 19:40:15.055101 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-05 19:40:15.055111 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-05 19:40:15.055121 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-05 19:40:15.055130 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-06-05 19:40:15.055140 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-05 19:40:15.055150 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-06-05 19:40:15.055170 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-05 19:40:15.055180 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-05 19:40:15.055190 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-05 19:40:15.055200 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-05 19:40:15.055209 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-06-05 19:40:15.055219 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-06-05 19:40:15.055229 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-05 19:40:15.055251 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-05 19:40:15.055261 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-05 19:40:15.055271 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-05 19:40:15.055280 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-06-05 19:40:15.055290 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-06-05 19:40:15.055299 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-05 19:40:15.055309 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-05 19:40:15.055319 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-06-05 19:40:15.055334 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-05 19:40:15.055344 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-05 19:40:15.055354 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-06-05 19:40:15.055364 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-05 19:40:15.055373 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-05 19:40:15.055383 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-06-05 19:40:15.055393 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-05 19:40:15.055402 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-05 19:40:15.055412 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-06-05 19:40:15.055422 | orchestrator |
2025-06-05 19:40:15.055436 | orchestrator | TASK [Manage annotations] ******************************************************
2025-06-05 19:40:15.055446 | orchestrator | Thursday 05 June 2025 19:40:11 +0000 (0:00:11.551) 0:04:22.367 *********
2025-06-05 19:40:15.055456 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:40:15.055466 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:40:15.055475 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:40:15.055485 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.055495 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:40:15.055504 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:40:15.055514 | orchestrator |
2025-06-05 19:40:15.055529 | orchestrator | TASK [Manage taints] ***********************************************************
2025-06-05 19:40:15.055539 | orchestrator | Thursday 05 June 2025 19:40:12 +0000 (0:00:00.573) 0:04:22.941 *********
2025-06-05 19:40:15.055548 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:40:15.055558 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:40:15.055567 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:40:15.055577 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:40:15.055587 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:40:15.055596 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:40:15.055606 | orchestrator |
2025-06-05 19:40:15.055616 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:40:15.055626 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:40:15.055637 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-06-05 19:40:15.055647 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-05 19:40:15.055657 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-06-05 19:40:15.055667 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-05 19:40:15.055676 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-05 19:40:15.055686 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-06-05 19:40:15.055696 | orchestrator |
2025-06-05 19:40:15.055706 | orchestrator |
2025-06-05 19:40:15.055716 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:40:15.055725 | orchestrator | Thursday 05 June 2025 19:40:12 +0000 (0:00:00.381) 0:04:23.322 *********
2025-06-05 19:40:15.055743 | orchestrator | ===============================================================================
2025-06-05 19:40:15.055753 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 61.23s
2025-06-05 19:40:15.055762 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.42s
2025-06-05 19:40:15.055772 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.22s
2025-06-05 19:40:15.055782 | orchestrator | Manage labels ---------------------------------------------------------- 11.55s
2025-06-05 19:40:15.055792 | orchestrator | kubectl : Install required packages ------------------------------------ 10.62s
2025-06-05 19:40:15.055801 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.65s
2025-06-05 19:40:15.055811 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.33s
2025-06-05 19:40:15.055821 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.04s
2025-06-05 19:40:15.055830 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.71s
2025-06-05 19:40:15.055840 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.01s
2025-06-05 19:40:15.055850 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.18s
2025-06-05 19:40:15.055859 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.08s
2025-06-05 19:40:15.055869 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.35s
2025-06-05 19:40:15.055879 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.03s
2025-06-05 19:40:15.055889 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.96s
2025-06-05 19:40:15.055898 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.92s
2025-06-05 19:40:15.055908 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.82s
2025-06-05 19:40:15.055917 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 1.75s
2025-06-05 19:40:15.055927 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.68s
2025-06-05 19:40:15.055937 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.67s
2025-06-05 19:40:15.055946 | orchestrator | 2025-06-05 19:40:15 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED
2025-06-05 19:40:15.055956 | orchestrator | 2025-06-05 19:40:15 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:40:18.094450 | orchestrator | 2025-06-05 19:40:18 | INFO  | Task df1ae73d-ae4b-4085-a29a-98b13fdc0e01 is in
state STARTED 2025-06-05 19:40:18.094538 | orchestrator | 2025-06-05 19:40:18 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state STARTED 2025-06-05 19:40:18.096079 | orchestrator | 2025-06-05 19:40:18 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:40:18.096656 | orchestrator | 2025-06-05 19:40:18 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:40:18.097887 | orchestrator | 2025-06-05 19:40:18 | INFO  | Task 67e736d4-8816-466f-a297-84d616386075 is in state STARTED 2025-06-05 19:40:18.099078 | orchestrator | 2025-06-05 19:40:18 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:40:18.099100 | orchestrator | 2025-06-05 19:40:18 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:40:21.147019 | orchestrator | 2025-06-05 19:40:21 | INFO  | Task 67e736d4-8816-466f-a297-84d616386075 is in state SUCCESS 2025-06-05 19:40:24.179648 | orchestrator | 2025-06-05 19:40:24 | INFO  | Task df1ae73d-ae4b-4085-a29a-98b13fdc0e01 is in state SUCCESS [... repeated polling lines (tasks c1d0e195, b8ab1e4f, 6f75b2cb, 14c31bdf in state STARTED, "Wait 1 second(s) until the next check") from 19:40:21 through 19:40:57 omitted; no state changes ...] 2025-06-05 19:41:00.800823 | orchestrator | 2025-06-05 19:41:00.800979 | orchestrator | 2025-06-05 19:41:00.800995 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-05 19:41:00.801008 |
orchestrator | 2025-06-05 19:41:00.801020 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-05 19:41:00.801032 | orchestrator | Thursday 05 June 2025 19:40:16 +0000 (0:00:00.126) 0:00:00.126 ********* 2025-06-05 19:41:00.801044 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-05 19:41:00.801056 | orchestrator | 2025-06-05 19:41:00.801068 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-05 19:41:00.801079 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:00.749) 0:00:00.876 ********* 2025-06-05 19:41:00.801090 | orchestrator | changed: [testbed-manager] 2025-06-05 19:41:00.801101 | orchestrator | 2025-06-05 19:41:00.801112 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-05 19:41:00.801123 | orchestrator | Thursday 05 June 2025 19:40:18 +0000 (0:00:01.063) 0:00:01.939 ********* 2025-06-05 19:41:00.801257 | orchestrator | changed: [testbed-manager] 2025-06-05 19:41:00.801303 | orchestrator | 2025-06-05 19:41:00.801315 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:41:00.801326 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:41:00.801342 | orchestrator | 2025-06-05 19:41:00.801355 | orchestrator | 2025-06-05 19:41:00.801368 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:41:00.801380 | orchestrator | Thursday 05 June 2025 19:40:18 +0000 (0:00:00.375) 0:00:02.314 ********* 2025-06-05 19:41:00.801392 | orchestrator | =============================================================================== 2025-06-05 19:41:00.801406 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.06s 2025-06-05 19:41:00.801418 | orchestrator | Get 
kubeconfig file ----------------------------------------------------- 0.75s 2025-06-05 19:41:00.801431 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.38s 2025-06-05 19:41:00.801443 | orchestrator | 2025-06-05 19:41:00.801455 | orchestrator | 2025-06-05 19:41:00.801468 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-05 19:41:00.801480 | orchestrator | 2025-06-05 19:41:00.801492 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-05 19:41:00.801506 | orchestrator | Thursday 05 June 2025 19:40:16 +0000 (0:00:00.147) 0:00:00.147 ********* 2025-06-05 19:41:00.801519 | orchestrator | ok: [testbed-manager] 2025-06-05 19:41:00.801532 | orchestrator | 2025-06-05 19:41:00.801545 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-05 19:41:00.801605 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:00.479) 0:00:00.627 ********* 2025-06-05 19:41:00.801647 | orchestrator | ok: [testbed-manager] 2025-06-05 19:41:00.801661 | orchestrator | 2025-06-05 19:41:00.801673 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-05 19:41:00.801685 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:00.447) 0:00:01.075 ********* 2025-06-05 19:41:00.801695 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-05 19:41:00.801707 | orchestrator | 2025-06-05 19:41:00.801718 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-05 19:41:00.801728 | orchestrator | Thursday 05 June 2025 19:40:18 +0000 (0:00:00.724) 0:00:01.800 ********* 2025-06-05 19:41:00.801739 | orchestrator | changed: [testbed-manager] 2025-06-05 19:41:00.801750 | orchestrator | 2025-06-05 19:41:00.801761 | orchestrator | TASK [Change server address in the kubeconfig] 
********************************* 2025-06-05 19:41:00.801772 | orchestrator | Thursday 05 June 2025 19:40:19 +0000 (0:00:01.051) 0:00:02.851 ********* 2025-06-05 19:41:00.801782 | orchestrator | changed: [testbed-manager] 2025-06-05 19:41:00.801793 | orchestrator | 2025-06-05 19:41:00.801804 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-05 19:41:00.801815 | orchestrator | Thursday 05 June 2025 19:40:20 +0000 (0:00:00.743) 0:00:03.594 ********* 2025-06-05 19:41:00.801825 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-05 19:41:00.801836 | orchestrator | 2025-06-05 19:41:00.801847 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-05 19:41:00.801858 | orchestrator | Thursday 05 June 2025 19:40:22 +0000 (0:00:01.732) 0:00:05.327 ********* 2025-06-05 19:41:00.801869 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-05 19:41:00.801880 | orchestrator | 2025-06-05 19:41:00.801891 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-05 19:41:00.801902 | orchestrator | Thursday 05 June 2025 19:40:22 +0000 (0:00:00.746) 0:00:06.073 ********* 2025-06-05 19:41:00.801912 | orchestrator | ok: [testbed-manager] 2025-06-05 19:41:00.801923 | orchestrator | 2025-06-05 19:41:00.801934 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-05 19:41:00.801945 | orchestrator | Thursday 05 June 2025 19:40:23 +0000 (0:00:00.425) 0:00:06.498 ********* 2025-06-05 19:41:00.801955 | orchestrator | ok: [testbed-manager] 2025-06-05 19:41:00.801966 | orchestrator | 2025-06-05 19:41:00.801977 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:41:00.801988 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:41:00.801999 | 
orchestrator | 2025-06-05 19:41:00.802010 | orchestrator | 2025-06-05 19:41:00.802076 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:41:00.802088 | orchestrator | Thursday 05 June 2025 19:40:23 +0000 (0:00:00.249) 0:00:06.748 ********* 2025-06-05 19:41:00.802098 | orchestrator | =============================================================================== 2025-06-05 19:41:00.802109 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.73s 2025-06-05 19:41:00.802134 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.05s 2025-06-05 19:41:00.802145 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.75s 2025-06-05 19:41:00.802175 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.74s 2025-06-05 19:41:00.802186 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2025-06-05 19:41:00.802198 | orchestrator | Get home directory of operator user ------------------------------------- 0.48s 2025-06-05 19:41:00.802208 | orchestrator | Create .kube directory -------------------------------------------------- 0.45s 2025-06-05 19:41:00.802219 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.43s 2025-06-05 19:41:00.802230 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.25s 2025-06-05 19:41:00.802250 | orchestrator | 2025-06-05 19:41:00.802294 | orchestrator | 2025-06-05 19:41:00 | INFO  | Task c1d0e195-8a66-4578-a0bc-e2cf894338a8 is in state SUCCESS 2025-06-05 19:41:00.802534 | orchestrator | 2025-06-05 19:41:00.802624 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-05 19:41:00.802640 | orchestrator | 2025-06-05 19:41:00.802652 | orchestrator | TASK [Inform the user 
about the following task] ******************************** 2025-06-05 19:41:00.802664 | orchestrator | Thursday 05 June 2025 19:38:42 +0000 (0:00:00.166) 0:00:00.166 ********* 2025-06-05 19:41:00.802675 | orchestrator | ok: [localhost] => { 2025-06-05 19:41:00.802688 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-05 19:41:00.802700 | orchestrator | } 2025-06-05 19:41:00.802712 | orchestrator | 2025-06-05 19:41:00.802723 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-05 19:41:00.802734 | orchestrator | Thursday 05 June 2025 19:38:42 +0000 (0:00:00.068) 0:00:00.235 ********* 2025-06-05 19:41:00.802746 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-05 19:41:00.802758 | orchestrator | ...ignoring 2025-06-05 19:41:00.802770 | orchestrator | 2025-06-05 19:41:00.802781 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-05 19:41:00.802792 | orchestrator | Thursday 05 June 2025 19:38:46 +0000 (0:00:03.807) 0:00:04.042 ********* 2025-06-05 19:41:00.802803 | orchestrator | skipping: [localhost] 2025-06-05 19:41:00.802814 | orchestrator | 2025-06-05 19:41:00.802825 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-05 19:41:00.802836 | orchestrator | Thursday 05 June 2025 19:38:46 +0000 (0:00:00.097) 0:00:04.140 ********* 2025-06-05 19:41:00.802847 | orchestrator | ok: [localhost] 2025-06-05 19:41:00.802858 | orchestrator | 2025-06-05 19:41:00.802869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:41:00.802880 | orchestrator | 2025-06-05 19:41:00.802891 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-06-05 19:41:00.802902 | orchestrator | Thursday 05 June 2025 19:38:46 +0000 (0:00:00.163) 0:00:04.304 ********* 2025-06-05 19:41:00.802914 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:41:00.802925 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:41:00.802935 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:41:00.802946 | orchestrator | 2025-06-05 19:41:00.802958 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:41:00.802970 | orchestrator | Thursday 05 June 2025 19:38:46 +0000 (0:00:00.300) 0:00:04.604 ********* 2025-06-05 19:41:00.802981 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-05 19:41:00.802992 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-05 19:41:00.803003 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-05 19:41:00.803014 | orchestrator | 2025-06-05 19:41:00.803025 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-05 19:41:00.803036 | orchestrator | 2025-06-05 19:41:00.803048 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-05 19:41:00.803062 | orchestrator | Thursday 05 June 2025 19:38:47 +0000 (0:00:00.436) 0:00:05.040 ********* 2025-06-05 19:41:00.803074 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:41:00.803087 | orchestrator | 2025-06-05 19:41:00.803100 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-05 19:41:00.803113 | orchestrator | Thursday 05 June 2025 19:38:47 +0000 (0:00:00.477) 0:00:05.518 ********* 2025-06-05 19:41:00.803126 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:41:00.803139 | orchestrator | 2025-06-05 19:41:00.803151 | orchestrator | TASK [rabbitmq : Get current 
RabbitMQ version] ********************************* 2025-06-05 19:41:00.803164 | orchestrator | Thursday 05 June 2025 19:38:48 +0000 (0:00:01.056) 0:00:06.575 ********* 2025-06-05 19:41:00.803202 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.803216 | orchestrator | 2025-06-05 19:41:00.803228 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-05 19:41:00.803241 | orchestrator | Thursday 05 June 2025 19:38:49 +0000 (0:00:00.324) 0:00:06.900 ********* 2025-06-05 19:41:00.803254 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.803284 | orchestrator | 2025-06-05 19:41:00.803297 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-05 19:41:00.803310 | orchestrator | Thursday 05 June 2025 19:38:49 +0000 (0:00:00.319) 0:00:07.220 ********* 2025-06-05 19:41:00.803322 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.803335 | orchestrator | 2025-06-05 19:41:00.803348 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-05 19:41:00.803361 | orchestrator | Thursday 05 June 2025 19:38:49 +0000 (0:00:00.322) 0:00:07.543 ********* 2025-06-05 19:41:00.803387 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.803400 | orchestrator | 2025-06-05 19:41:00.803423 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-05 19:41:00.803435 | orchestrator | Thursday 05 June 2025 19:38:50 +0000 (0:00:00.509) 0:00:08.052 ********* 2025-06-05 19:41:00.803460 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:41:00.803471 | orchestrator | 2025-06-05 19:41:00.803482 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-05 19:41:00.803493 | orchestrator | Thursday 05 June 2025 
19:38:51 +0000 (0:00:00.788) 0:00:08.841 ********* 2025-06-05 19:41:00.803504 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:41:00.803515 | orchestrator | 2025-06-05 19:41:00.803526 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-05 19:41:00.803537 | orchestrator | Thursday 05 June 2025 19:38:52 +0000 (0:00:01.023) 0:00:09.864 ********* 2025-06-05 19:41:00.803548 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.803559 | orchestrator | 2025-06-05 19:41:00.803570 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-05 19:41:00.803581 | orchestrator | Thursday 05 June 2025 19:38:53 +0000 (0:00:00.850) 0:00:10.714 ********* 2025-06-05 19:41:00.803592 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.803603 | orchestrator | 2025-06-05 19:41:00.803638 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-05 19:41:00.803650 | orchestrator | Thursday 05 June 2025 19:38:53 +0000 (0:00:00.403) 0:00:11.118 ********* 2025-06-05 19:41:00.803666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-05 19:41:00.803684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.803709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.803722 | orchestrator | 2025-06-05 19:41:00.803739 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-05 19:41:00.803750 | orchestrator | Thursday 05 June 2025 19:38:54 +0000 (0:00:00.996) 0:00:12.114 ********* 2025-06-05 19:41:00.803771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.803785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.803806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.803819 | orchestrator | 2025-06-05 19:41:00.803830 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-05 19:41:00.803841 | orchestrator | Thursday 05 June 2025 19:38:57 +0000 (0:00:02.672) 0:00:14.787 ********* 2025-06-05 19:41:00.803852 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-05 19:41:00.803863 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-05 19:41:00.803874 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-05 19:41:00.803885 | orchestrator | 2025-06-05 19:41:00.803896 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-05 19:41:00.803908 | orchestrator | Thursday 05 June 2025 19:38:58 +0000 (0:00:01.421) 0:00:16.209 ********* 2025-06-05 19:41:00.803919 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-05 19:41:00.803934 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-05 19:41:00.803945 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-05 19:41:00.803956 | orchestrator | 2025-06-05 19:41:00.803967 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-05 19:41:00.803978 | orchestrator | Thursday 05 June 2025 19:39:00 +0000 (0:00:01.696) 0:00:17.905 ********* 2025-06-05 19:41:00.803989 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-05 19:41:00.804000 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-05
19:41:00.804012 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-05 19:41:00.804023 | orchestrator | 2025-06-05 19:41:00.804040 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-05 19:41:00.804051 | orchestrator | Thursday 05 June 2025 19:39:01 +0000 (0:00:01.465) 0:00:19.371 ********* 2025-06-05 19:41:00.804062 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-05 19:41:00.804073 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-05 19:41:00.804084 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-05 19:41:00.804095 | orchestrator | 2025-06-05 19:41:00.804106 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-05 19:41:00.804117 | orchestrator | Thursday 05 June 2025 19:39:03 +0000 (0:00:01.971) 0:00:21.342 ********* 2025-06-05 19:41:00.804135 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-05 19:41:00.804146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-05 19:41:00.804158 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-05 19:41:00.804169 | orchestrator | 2025-06-05 19:41:00.804180 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-05 19:41:00.804191 | orchestrator | Thursday 05 June 2025 19:39:05 +0000 (0:00:01.623) 0:00:22.966 ********* 2025-06-05 19:41:00.804201 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-05 19:41:00.804212 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-05 19:41:00.804223 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-05 19:41:00.804234 | orchestrator | 2025-06-05 19:41:00.804245 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-05 19:41:00.804256 | orchestrator | Thursday 05 June 2025 19:39:06 +0000 (0:00:01.532) 0:00:24.498 ********* 2025-06-05 19:41:00.804286 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.804298 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:41:00.804310 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:41:00.804320 | orchestrator | 2025-06-05 19:41:00.804332 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-05 19:41:00.804343 | orchestrator | Thursday 05 June 2025 19:39:07 +0000 (0:00:00.333) 0:00:24.832 ********* 2025-06-05 19:41:00.804355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.804374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.804395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {... same service definition as above ...}}) 2025-06-05 19:41:00.804415 | orchestrator | 2025-06-05 19:41:00.804426 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-05 19:41:00.804437 | orchestrator | Thursday 05 June 2025 19:39:08 +0000 (0:00:01.392) 0:00:26.224 ********* 2025-06-05 19:41:00.804449 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:41:00.804460 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:41:00.804471 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:41:00.804482 | orchestrator | 2025-06-05 19:41:00.804493 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-05 19:41:00.804505 | orchestrator | Thursday 05 June 2025 19:39:09 +0000 (0:00:05.936) 0:00:27.210 ********* 2025-06-05 19:41:00.804516 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:41:00.804527 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:41:00.804538 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:41:00.804549 | orchestrator | 2025-06-05 19:41:00.804560 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-05 19:41:00.804572 | orchestrator | Thursday 05 June 2025 19:39:15 +0000 (0:00:05.936) 0:00:33.147 ********* 2025-06-05 19:41:00.804583 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:41:00.804594 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:41:00.804605 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:41:00.804616 | orchestrator | 2025-06-05 19:41:00.804628 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-05 19:41:00.804639 |
orchestrator | 2025-06-05 19:41:00.804650 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-05 19:41:00.804661 | orchestrator | Thursday 05 June 2025 19:39:15 +0000 (0:00:00.363) 0:00:33.511 ********* 2025-06-05 19:41:00.804673 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:41:00.804684 | orchestrator | 2025-06-05 19:41:00.804695 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-05 19:41:00.804706 | orchestrator | Thursday 05 June 2025 19:39:16 +0000 (0:00:00.655) 0:00:34.167 ********* 2025-06-05 19:41:00.804717 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:41:00.804728 | orchestrator | 2025-06-05 19:41:00.804740 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-05 19:41:00.804751 | orchestrator | Thursday 05 June 2025 19:39:16 +0000 (0:00:00.236) 0:00:34.403 ********* 2025-06-05 19:41:00.804762 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:41:00.804773 | orchestrator | 2025-06-05 19:41:00.804784 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-05 19:41:00.804795 | orchestrator | Thursday 05 June 2025 19:39:18 +0000 (0:00:01.977) 0:00:36.380 ********* 2025-06-05 19:41:00.804806 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:41:00.804817 | orchestrator | 2025-06-05 19:41:00.804828 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-05 19:41:00.804839 | orchestrator | 2025-06-05 19:41:00.804850 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-05 19:41:00.804861 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:58.668) 0:01:35.049 ********* 2025-06-05 19:41:00.804872 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:41:00.804890 | orchestrator | 2025-06-05 19:41:00.804902 | orchestrator | 
TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-05 19:41:00.804913 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:00.616) 0:01:35.665 ********* 2025-06-05 19:41:00.804924 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:41:00.804935 | orchestrator | 2025-06-05 19:41:00.804946 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-05 19:41:00.804957 | orchestrator | Thursday 05 June 2025 19:40:18 +0000 (0:00:00.342) 0:01:36.007 ********* 2025-06-05 19:41:00.804968 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:41:00.804979 | orchestrator | 2025-06-05 19:41:00.804990 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-05 19:41:00.805001 | orchestrator | Thursday 05 June 2025 19:40:20 +0000 (0:00:01.904) 0:01:37.912 ********* 2025-06-05 19:41:00.805012 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:41:00.805024 | orchestrator | 2025-06-05 19:41:00.805034 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-05 19:41:00.805045 | orchestrator | 2025-06-05 19:41:00.805056 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-05 19:41:00.805067 | orchestrator | Thursday 05 June 2025 19:40:36 +0000 (0:00:16.636) 0:01:54.548 ********* 2025-06-05 19:41:00.805079 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:41:00.805091 | orchestrator | 2025-06-05 19:41:00.805101 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-05 19:41:00.805113 | orchestrator | Thursday 05 June 2025 19:40:37 +0000 (0:00:00.645) 0:01:55.194 ********* 2025-06-05 19:41:00.805124 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:41:00.805135 | orchestrator | 2025-06-05 19:41:00.805146 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2025-06-05 19:41:00.805163 | orchestrator | Thursday 05 June 2025 19:40:37 +0000 (0:00:00.249) 0:01:55.444 ********* 2025-06-05 19:41:00.805175 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:41:00.805186 | orchestrator | 2025-06-05 19:41:00.805198 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-05 19:41:00.805209 | orchestrator | Thursday 05 June 2025 19:40:39 +0000 (0:00:01.672) 0:01:57.116 ********* 2025-06-05 19:41:00.805219 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:41:00.805230 | orchestrator | 2025-06-05 19:41:00.805241 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-05 19:41:00.805253 | orchestrator | 2025-06-05 19:41:00.805279 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-05 19:41:00.805291 | orchestrator | Thursday 05 June 2025 19:40:55 +0000 (0:00:16.222) 0:02:13.339 ********* 2025-06-05 19:41:00.805301 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:41:00.805312 | orchestrator | 2025-06-05 19:41:00.805323 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-05 19:41:00.805334 | orchestrator | Thursday 05 June 2025 19:40:56 +0000 (0:00:00.983) 0:02:14.323 ********* 2025-06-05 19:41:00.805345 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-05 19:41:00.805356 | orchestrator | enable_outward_rabbitmq_True 2025-06-05 19:41:00.805367 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-05 19:41:00.805378 | orchestrator | outward_rabbitmq_restart 2025-06-05 19:41:00.805389 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:41:00.805401 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:41:00.805412 | orchestrator | ok: [testbed-node-0] 2025-06-05 
19:41:00.805422 | orchestrator | 2025-06-05 19:41:00.805434 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-05 19:41:00.805445 | orchestrator | skipping: no hosts matched 2025-06-05 19:41:00.805456 | orchestrator | 2025-06-05 19:41:00.805467 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-05 19:41:00.805477 | orchestrator | skipping: no hosts matched 2025-06-05 19:41:00.805488 | orchestrator | 2025-06-05 19:41:00.805506 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-05 19:41:00.805517 | orchestrator | skipping: no hosts matched 2025-06-05 19:41:00.805528 | orchestrator | 2025-06-05 19:41:00.805539 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:41:00.805551 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-05 19:41:00.805562 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-05 19:41:00.805574 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 19:41:00.805656 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 19:41:00.805676 | orchestrator | 2025-06-05 19:41:00.805687 | orchestrator | 2025-06-05 19:41:00.805698 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:41:00.805709 | orchestrator | Thursday 05 June 2025 19:40:59 +0000 (0:00:02.594) 0:02:16.918 ********* 2025-06-05 19:41:00.805720 | orchestrator | =============================================================================== 2025-06-05 19:41:00.805731 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 91.53s 2025-06-05 
19:41:00.805743 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.94s 2025-06-05 19:41:00.805754 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.55s 2025-06-05 19:41:00.805764 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.81s 2025-06-05 19:41:00.805776 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.67s 2025-06-05 19:41:00.805787 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.59s 2025-06-05 19:41:00.805799 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.97s 2025-06-05 19:41:00.805810 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.92s 2025-06-05 19:41:00.805821 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.70s 2025-06-05 19:41:00.805832 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.62s 2025-06-05 19:41:00.805843 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.53s 2025-06-05 19:41:00.805855 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.47s 2025-06-05 19:41:00.805870 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.42s 2025-06-05 19:41:00.805881 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.39s 2025-06-05 19:41:00.805892 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.06s 2025-06-05 19:41:00.805903 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.02s 2025-06-05 19:41:00.805915 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.00s 2025-06-05 19:41:00.805926 
| orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2025-06-05 19:41:00.805937 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 0.98s 2025-06-05 19:41:00.805948 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 0.85s 2025-06-05 19:41:00.805967 | orchestrator | 2025-06-05 19:41:00 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:00.806921 | orchestrator | 2025-06-05 19:41:00 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:00.809133 | orchestrator | 2025-06-05 19:41:00 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:00.809178 | orchestrator | 2025-06-05 19:41:00 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:03.844821 | orchestrator | 2025-06-05 19:41:03 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:03.844934 | orchestrator | 2025-06-05 19:41:03 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:03.844949 | orchestrator | 2025-06-05 19:41:03 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:03.844961 | orchestrator | 2025-06-05 19:41:03 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:06.875720 | orchestrator | 2025-06-05 19:41:06 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:06.876388 | orchestrator | 2025-06-05 19:41:06 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:06.876985 | orchestrator | 2025-06-05 19:41:06 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:06.877017 | orchestrator | 2025-06-05 19:41:06 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:09.905443 | orchestrator | 2025-06-05 19:41:09 | INFO  | Task 
b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:09.908062 | orchestrator | 2025-06-05 19:41:09 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:09.908952 | orchestrator | 2025-06-05 19:41:09 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:09.908991 | orchestrator | 2025-06-05 19:41:09 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:12.948526 | orchestrator | 2025-06-05 19:41:12 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:12.950438 | orchestrator | 2025-06-05 19:41:12 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:12.952486 | orchestrator | 2025-06-05 19:41:12 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:12.952524 | orchestrator | 2025-06-05 19:41:12 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:15.987935 | orchestrator | 2025-06-05 19:41:15 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:15.988136 | orchestrator | 2025-06-05 19:41:15 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:15.988835 | orchestrator | 2025-06-05 19:41:15 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:15.989019 | orchestrator | 2025-06-05 19:41:15 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:19.024598 | orchestrator | 2025-06-05 19:41:19 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:19.024688 | orchestrator | 2025-06-05 19:41:19 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:19.024703 | orchestrator | 2025-06-05 19:41:19 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:19.024715 | orchestrator | 2025-06-05 19:41:19 | INFO  | Wait 1 second(s) until the next 
check 2025-06-05 19:41:22.053721 | orchestrator | 2025-06-05 19:41:22 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:22.054892 | orchestrator | 2025-06-05 19:41:22 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:22.057456 | orchestrator | 2025-06-05 19:41:22 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:22.057532 | orchestrator | 2025-06-05 19:41:22 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:25.098305 | orchestrator | 2025-06-05 19:41:25 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:25.099653 | orchestrator | 2025-06-05 19:41:25 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:25.101192 | orchestrator | 2025-06-05 19:41:25 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:25.101604 | orchestrator | 2025-06-05 19:41:25 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:28.132707 | orchestrator | 2025-06-05 19:41:28 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:28.133403 | orchestrator | 2025-06-05 19:41:28 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:28.133435 | orchestrator | 2025-06-05 19:41:28 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:28.133449 | orchestrator | 2025-06-05 19:41:28 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:31.167771 | orchestrator | 2025-06-05 19:41:31 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:31.168465 | orchestrator | 2025-06-05 19:41:31 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:31.169552 | orchestrator | 2025-06-05 19:41:31 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 
19:41:31.169674 | orchestrator | 2025-06-05 19:41:31 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:34.217198 | orchestrator | 2025-06-05 19:41:34 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:34.218170 | orchestrator | 2025-06-05 19:41:34 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:34.218224 | orchestrator | 2025-06-05 19:41:34 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:34.218233 | orchestrator | 2025-06-05 19:41:34 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:37.248355 | orchestrator | 2025-06-05 19:41:37 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:37.249522 | orchestrator | 2025-06-05 19:41:37 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:37.250471 | orchestrator | 2025-06-05 19:41:37 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:37.250775 | orchestrator | 2025-06-05 19:41:37 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:40.271587 | orchestrator | 2025-06-05 19:41:40 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:40.273561 | orchestrator | 2025-06-05 19:41:40 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:40.275329 | orchestrator | 2025-06-05 19:41:40 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:40.275423 | orchestrator | 2025-06-05 19:41:40 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:43.302684 | orchestrator | 2025-06-05 19:41:43 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:43.303075 | orchestrator | 2025-06-05 19:41:43 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:43.306122 | orchestrator | 2025-06-05 19:41:43 | 
INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:43.306166 | orchestrator | 2025-06-05 19:41:43 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:46.360135 | orchestrator | 2025-06-05 19:41:46 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:46.361202 | orchestrator | 2025-06-05 19:41:46 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:46.362760 | orchestrator | 2025-06-05 19:41:46 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:46.363040 | orchestrator | 2025-06-05 19:41:46 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:49.407799 | orchestrator | 2025-06-05 19:41:49 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:49.410487 | orchestrator | 2025-06-05 19:41:49 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:49.410537 | orchestrator | 2025-06-05 19:41:49 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:49.410550 | orchestrator | 2025-06-05 19:41:49 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:52.443866 | orchestrator | 2025-06-05 19:41:52 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:52.445214 | orchestrator | 2025-06-05 19:41:52 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:52.446676 | orchestrator | 2025-06-05 19:41:52 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:52.446714 | orchestrator | 2025-06-05 19:41:52 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:55.489091 | orchestrator | 2025-06-05 19:41:55 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:55.489684 | orchestrator | 2025-06-05 19:41:55 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in 
state STARTED 2025-06-05 19:41:55.490445 | orchestrator | 2025-06-05 19:41:55 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:55.490479 | orchestrator | 2025-06-05 19:41:55 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:41:58.527639 | orchestrator | 2025-06-05 19:41:58 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:41:58.529211 | orchestrator | 2025-06-05 19:41:58 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:41:58.530700 | orchestrator | 2025-06-05 19:41:58 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:41:58.531041 | orchestrator | 2025-06-05 19:41:58 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:42:01.574524 | orchestrator | 2025-06-05 19:42:01 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:42:01.578894 | orchestrator | 2025-06-05 19:42:01 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:42:01.581733 | orchestrator | 2025-06-05 19:42:01 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:42:01.582387 | orchestrator | 2025-06-05 19:42:01 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:42:04.644498 | orchestrator | 2025-06-05 19:42:04 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:42:04.646479 | orchestrator | 2025-06-05 19:42:04 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:42:04.650489 | orchestrator | 2025-06-05 19:42:04 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED 2025-06-05 19:42:04.650803 | orchestrator | 2025-06-05 19:42:04 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:42:07.702101 | orchestrator | 2025-06-05 19:42:07 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED 2025-06-05 19:42:07.702322 | orchestrator 
| 2025-06-05 19:42:07 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:07.702823 | orchestrator | 2025-06-05 19:42:07 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED
2025-06-05 19:42:07.702882 | orchestrator | 2025-06-05 19:42:07 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:10.744745 | orchestrator | 2025-06-05 19:42:10 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:10.744875 | orchestrator | 2025-06-05 19:42:10 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:10.745002 | orchestrator | 2025-06-05 19:42:10 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state STARTED
2025-06-05 19:42:10.745020 | orchestrator | 2025-06-05 19:42:10 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:13.791645 | orchestrator | 2025-06-05 19:42:13 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:13.796362 | orchestrator | 2025-06-05 19:42:13 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:13.799352 | orchestrator | 2025-06-05 19:42:13 | INFO  | Task 14c31bdf-3faa-4ea2-a572-ceaf0377782e is in state SUCCESS
2025-06-05 19:42:13.799889 | orchestrator | 2025-06-05 19:42:13 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:13.814514 | orchestrator |
2025-06-05 19:42:13.814568 | orchestrator |
2025-06-05 19:42:13.814582 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:42:13.814594 | orchestrator |
2025-06-05 19:42:13.814606 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:42:13.814618 | orchestrator | Thursday 05 June 2025 19:39:29 +0000 (0:00:00.167) 0:00:00.168 *********
2025-06-05 19:42:13.814629 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:42:13.814641 | orchestrator | ok:
[testbed-node-4]
2025-06-05 19:42:13.814666 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:42:13.814742 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.814754 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.814765 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.814776 | orchestrator |
2025-06-05 19:42:13.814854 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:42:13.814924 | orchestrator | Thursday 05 June 2025 19:39:30 +0000 (0:00:00.775) 0:00:00.943 *********
2025-06-05 19:42:13.814936 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-05 19:42:13.814948 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-05 19:42:13.814959 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-05 19:42:13.814970 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-05 19:42:13.814982 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-05 19:42:13.814992 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-05 19:42:13.815003 | orchestrator |
2025-06-05 19:42:13.815015 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-05 19:42:13.815026 | orchestrator |
2025-06-05 19:42:13.815037 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-05 19:42:13.815048 | orchestrator | Thursday 05 June 2025 19:39:31 +0000 (0:00:00.867) 0:00:01.811 *********
2025-06-05 19:42:13.815060 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:42:13.815072 | orchestrator |
2025-06-05 19:42:13.815084 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-05 19:42:13.815095 | orchestrator | Thursday 05 June 2025
19:39:32 +0000 (0:00:01.133) 0:00:02.944 *********
2025-06-05 19:42:13.815110 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815149 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815163 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815190 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815250 | orchestrator |
2025-06-05 19:42:13.815264 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-05 19:42:13.815277 | orchestrator | Thursday 05 June 2025 19:39:34 +0000 (0:00:01.496) 0:00:04.441 *********
2025-06-05 19:42:13.815295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815309 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes':
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro',
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815382 | orchestrator |
2025-06-05 19:42:13.815395 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-05 19:42:13.815408 | orchestrator | Thursday 05 June 2025 19:39:35 +0000 (0:00:01.781) 0:00:06.222 *********
2025-06-05 19:42:13.815421 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller',
'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815514 | orchestrator |
2025-06-05 19:42:13.815525 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-05 19:42:13.815536 | orchestrator | Thursday 05 June 2025 19:39:36 +0000 (0:00:01.116) 0:00:07.339 *********
2025-06-05 19:42:13.815547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815559 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815615 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815627 | orchestrator |
2025-06-05 19:42:13.815638 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-05 19:42:13.815649 | orchestrator | Thursday 05 June 2025 19:39:38 +0000 (0:00:01.581) 0:00:08.920 *********
2025-06-05 19:42:13.815660 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815677 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes':
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.815734 | orchestrator |
2025-06-05 19:42:13.815745 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-05 19:42:13.815756 | orchestrator | Thursday 05 June 2025 19:39:39 +0000 (0:00:02.590) 0:00:10.319 *********
2025-06-05 19:42:13.815767 | orchestrator | changed: [testbed-node-3]
2025-06-05
19:42:13.815779 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:42:13.815790 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:42:13.815801 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:42:13.815812 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:42:13.815823 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:42:13.815834 | orchestrator |
2025-06-05 19:42:13.815845 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-05 19:42:13.815856 | orchestrator | Thursday 05 June 2025 19:39:42 +0000 (0:00:02.590) 0:00:12.909 *********
2025-06-05 19:42:13.815867 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-05 19:42:13.815878 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-05 19:42:13.815889 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-05 19:42:13.815905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-05 19:42:13.815922 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-05 19:42:13.815933 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-05 19:42:13.815949 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-05 19:42:13.815960 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-05 19:42:13.815971 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-05 19:42:13.815982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-05 19:42:13.815993 | orchestrator | changed: [testbed-node-3] =>
(item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-05 19:42:13.816005 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-05 19:42:13.816016 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-05 19:42:13.816027 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-05 19:42:13.816038 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-05 19:42:13.816049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-05 19:42:13.816060 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-05 19:42:13.816072 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-05 19:42:13.816083 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-05 19:42:13.816094 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-05 19:42:13.816105 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-05 19:42:13.816116 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-05 19:42:13.816126 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-05 19:42:13.816137 |
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-05 19:42:13.816148 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-05 19:42:13.816159 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-05 19:42:13.816170 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-05 19:42:13.816181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-05 19:42:13.816192 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-05 19:42:13.816203 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-05 19:42:13.816214 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-05 19:42:13.816239 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-05 19:42:13.816250 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-05 19:42:13.816261 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-05 19:42:13.816278 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-05 19:42:13.816290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-05 19:42:13.816301 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-05 19:42:13.816312 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-05 19:42:13.816323 |
orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-05 19:42:13.816334 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-05 19:42:13.816350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-05 19:42:13.816362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-05 19:42:13.816373 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-05 19:42:13.816389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-05 19:42:13.816400 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-05 19:42:13.816411 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-05 19:42:13.816422 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-05 19:42:13.816433 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-05 19:42:13.816444 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-05 19:42:13.816455 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-05 19:42:13.816466 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value':
'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-05 19:42:13.816477 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-05 19:42:13.816489 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-05 19:42:13.816499 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-05 19:42:13.816510 | orchestrator |
2025-06-05 19:42:13.816521 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-05 19:42:13.816532 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:23.026) 0:00:35.936 *********
2025-06-05 19:42:13.816543 | orchestrator |
2025-06-05 19:42:13.816554 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-05 19:42:13.816565 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:00.079) 0:00:36.016 *********
2025-06-05 19:42:13.816576 | orchestrator |
2025-06-05 19:42:13.816587 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-05 19:42:13.816598 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:00.059) 0:00:36.075 *********
2025-06-05 19:42:13.816608 | orchestrator |
2025-06-05 19:42:13.816619 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-05 19:42:13.816636 | orchestrator | Thursday 05 June 2025 19:40:05 +0000
(0:00:00.059) 0:00:36.194 *********
2025-06-05 19:42:13.816680 | orchestrator |
2025-06-05 19:42:13.816691 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-05 19:42:13.816701 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:00.057) 0:00:36.251 *********
2025-06-05 19:42:13.816712 | orchestrator |
2025-06-05 19:42:13.816723 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-05 19:42:13.816734 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:00.124) 0:00:36.376 *********
2025-06-05 19:42:13.816744 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:42:13.816755 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:42:13.816766 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:42:13.816777 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.816788 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.816798 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.816809 | orchestrator |
2025-06-05 19:42:13.816820 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-05 19:42:13.816831 | orchestrator | Thursday 05 June 2025 19:40:08 +0000 (0:00:02.856) 0:00:39.232 *********
2025-06-05 19:42:13.816842 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:42:13.816853 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:42:13.816864 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:42:13.816874 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:42:13.816885 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:42:13.816896 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:42:13.816907 | orchestrator |
2025-06-05 19:42:13.816918 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-05 19:42:13.816928 | orchestrator |
2025-06-05 19:42:13.816939 | orchestrator | TASK [ovn-db : include_tasks]
**************************************************
2025-06-05 19:42:13.816950 | orchestrator | Thursday 05 June 2025 19:40:51 +0000 (0:00:43.099) 0:01:22.331 *********
2025-06-05 19:42:13.816961 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:42:13.816972 | orchestrator |
2025-06-05 19:42:13.816983 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-05 19:42:13.816994 | orchestrator | Thursday 05 June 2025 19:40:52 +0000 (0:00:00.489) 0:01:22.821 *********
2025-06-05 19:42:13.817005 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:42:13.817016 | orchestrator |
2025-06-05 19:42:13.817032 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-05 19:42:13.817043 | orchestrator | Thursday 05 June 2025 19:40:53 +0000 (0:00:00.648) 0:01:23.469 *********
2025-06-05 19:42:13.817054 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.817065 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.817076 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.817087 | orchestrator |
2025-06-05 19:42:13.817103 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-05 19:42:13.817114 | orchestrator | Thursday 05 June 2025 19:40:53 +0000 (0:00:00.756) 0:01:24.226 *********
2025-06-05 19:42:13.817125 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.817136 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.817147 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.817157 | orchestrator |
2025-06-05 19:42:13.817168 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-05 19:42:13.817179 | orchestrator | Thursday 05 June 2025 19:40:54 +0000 (0:00:00.301) 0:01:24.528 *********
2025-06-05 19:42:13.817190 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.817201 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.817211 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.817261 | orchestrator | 2025-06-05 19:42:13.817273 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-05 19:42:13.817284 | orchestrator | Thursday 05 June 2025 19:40:54 +0000 (0:00:00.312) 0:01:24.840 ********* 2025-06-05 19:42:13.817295 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.817305 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.817316 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.817327 | orchestrator | 2025-06-05 19:42:13.817338 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-05 19:42:13.817348 | orchestrator | Thursday 05 June 2025 19:40:54 +0000 (0:00:00.489) 0:01:25.330 ********* 2025-06-05 19:42:13.817359 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.817370 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.817380 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.817391 | orchestrator | 2025-06-05 19:42:13.817402 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-05 19:42:13.817413 | orchestrator | Thursday 05 June 2025 19:40:55 +0000 (0:00:00.352) 0:01:25.683 ********* 2025-06-05 19:42:13.817424 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817435 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817445 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817456 | orchestrator | 2025-06-05 19:42:13.817467 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-05 19:42:13.817478 | orchestrator | Thursday 05 June 2025 19:40:55 +0000 (0:00:00.412) 0:01:26.095 ********* 2025-06-05 19:42:13.817489 | orchestrator | skipping: 
[testbed-node-0] 2025-06-05 19:42:13.817499 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817510 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817521 | orchestrator | 2025-06-05 19:42:13.817532 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-05 19:42:13.817543 | orchestrator | Thursday 05 June 2025 19:40:56 +0000 (0:00:00.352) 0:01:26.448 ********* 2025-06-05 19:42:13.817553 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817564 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817575 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817586 | orchestrator | 2025-06-05 19:42:13.817596 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-05 19:42:13.817607 | orchestrator | Thursday 05 June 2025 19:40:56 +0000 (0:00:00.616) 0:01:27.065 ********* 2025-06-05 19:42:13.817618 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817629 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817639 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817650 | orchestrator | 2025-06-05 19:42:13.817661 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-05 19:42:13.817671 | orchestrator | Thursday 05 June 2025 19:40:56 +0000 (0:00:00.329) 0:01:27.394 ********* 2025-06-05 19:42:13.817682 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817693 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817704 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817715 | orchestrator | 2025-06-05 19:42:13.817725 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-05 19:42:13.817736 | orchestrator | Thursday 05 June 2025 19:40:57 +0000 (0:00:00.276) 0:01:27.670 ********* 2025-06-05 19:42:13.817747 | orchestrator | skipping: 
[testbed-node-0] 2025-06-05 19:42:13.817758 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817769 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817779 | orchestrator | 2025-06-05 19:42:13.817790 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-05 19:42:13.817801 | orchestrator | Thursday 05 June 2025 19:40:57 +0000 (0:00:00.289) 0:01:27.960 ********* 2025-06-05 19:42:13.817812 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817823 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817833 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817844 | orchestrator | 2025-06-05 19:42:13.817855 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-05 19:42:13.817872 | orchestrator | Thursday 05 June 2025 19:40:58 +0000 (0:00:00.498) 0:01:28.458 ********* 2025-06-05 19:42:13.817883 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817894 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817905 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817915 | orchestrator | 2025-06-05 19:42:13.817926 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-05 19:42:13.817937 | orchestrator | Thursday 05 June 2025 19:40:58 +0000 (0:00:00.313) 0:01:28.772 ********* 2025-06-05 19:42:13.817948 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.817958 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.817969 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.817980 | orchestrator | 2025-06-05 19:42:13.817990 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-05 19:42:13.818001 | orchestrator | Thursday 05 June 2025 19:40:58 +0000 (0:00:00.275) 0:01:29.047 ********* 2025-06-05 19:42:13.818012 | orchestrator | skipping: 
[testbed-node-0] 2025-06-05 19:42:13.818075 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818086 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818097 | orchestrator | 2025-06-05 19:42:13.818114 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-05 19:42:13.818126 | orchestrator | Thursday 05 June 2025 19:40:58 +0000 (0:00:00.327) 0:01:29.374 ********* 2025-06-05 19:42:13.818137 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818148 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818159 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818170 | orchestrator | 2025-06-05 19:42:13.818180 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-05 19:42:13.818197 | orchestrator | Thursday 05 June 2025 19:40:59 +0000 (0:00:00.521) 0:01:29.896 ********* 2025-06-05 19:42:13.818208 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818219 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818245 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818256 | orchestrator | 2025-06-05 19:42:13.818267 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-05 19:42:13.818278 | orchestrator | Thursday 05 June 2025 19:40:59 +0000 (0:00:00.341) 0:01:30.237 ********* 2025-06-05 19:42:13.818289 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:42:13.818300 | orchestrator | 2025-06-05 19:42:13.818311 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-05 19:42:13.818322 | orchestrator | Thursday 05 June 2025 19:41:00 +0000 (0:00:00.534) 0:01:30.772 ********* 2025-06-05 19:42:13.818333 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.818344 | orchestrator | ok: 
[testbed-node-1] 2025-06-05 19:42:13.818354 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.818365 | orchestrator | 2025-06-05 19:42:13.818377 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-05 19:42:13.818388 | orchestrator | Thursday 05 June 2025 19:41:01 +0000 (0:00:00.815) 0:01:31.587 ********* 2025-06-05 19:42:13.818399 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.818410 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.818420 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.818431 | orchestrator | 2025-06-05 19:42:13.818442 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-05 19:42:13.818468 | orchestrator | Thursday 05 June 2025 19:41:01 +0000 (0:00:00.396) 0:01:31.984 ********* 2025-06-05 19:42:13.818480 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818491 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818502 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818513 | orchestrator | 2025-06-05 19:42:13.818523 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-05 19:42:13.818534 | orchestrator | Thursday 05 June 2025 19:41:01 +0000 (0:00:00.313) 0:01:32.298 ********* 2025-06-05 19:42:13.818552 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818563 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818574 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818585 | orchestrator | 2025-06-05 19:42:13.818595 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-05 19:42:13.818606 | orchestrator | Thursday 05 June 2025 19:41:02 +0000 (0:00:00.304) 0:01:32.602 ********* 2025-06-05 19:42:13.818617 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818628 | orchestrator | skipping: [testbed-node-1] 
2025-06-05 19:42:13.818639 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818649 | orchestrator | 2025-06-05 19:42:13.818660 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-05 19:42:13.818671 | orchestrator | Thursday 05 June 2025 19:41:02 +0000 (0:00:00.532) 0:01:33.135 ********* 2025-06-05 19:42:13.818682 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818693 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818703 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818714 | orchestrator | 2025-06-05 19:42:13.818725 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-05 19:42:13.818736 | orchestrator | Thursday 05 June 2025 19:41:03 +0000 (0:00:00.310) 0:01:33.445 ********* 2025-06-05 19:42:13.818746 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818757 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818768 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818779 | orchestrator | 2025-06-05 19:42:13.818789 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-05 19:42:13.818800 | orchestrator | Thursday 05 June 2025 19:41:03 +0000 (0:00:00.332) 0:01:33.778 ********* 2025-06-05 19:42:13.818811 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.818822 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.818833 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.818843 | orchestrator | 2025-06-05 19:42:13.818854 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-05 19:42:13.818865 | orchestrator | Thursday 05 June 2025 19:41:03 +0000 (0:00:00.334) 0:01:34.112 ********* 2025-06-05 19:42:13.818877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.818991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819003 | orchestrator | 2025-06-05 19:42:13.819014 | orchestrator | TASK [ovn-db : Copying over 
config.json files for services] ******************** 2025-06-05 19:42:13.819025 | orchestrator | Thursday 05 June 2025 19:41:05 +0000 (0:00:01.573) 0:01:35.686 ********* 2025-06-05 19:42:13.819036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819134 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819157 | orchestrator | 2025-06-05 19:42:13.819168 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-05 19:42:13.819179 | orchestrator | Thursday 05 June 2025 19:41:09 +0000 (0:00:04.480) 0:01:40.166 ********* 2025-06-05 19:42:13.819190 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819248 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819294 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-05 19:42:13.819317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:42:13.819328 | orchestrator | 2025-06-05 19:42:13.819340 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-05 19:42:13.819351 | orchestrator | Thursday 05 June 2025 19:41:11 +0000 (0:00:02.160) 0:01:42.327 ********* 2025-06-05 19:42:13.819362 | orchestrator | 2025-06-05 19:42:13.819373 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-05 19:42:13.819384 | orchestrator | Thursday 05 June 2025 19:41:11 +0000 (0:00:00.061) 0:01:42.388 ********* 2025-06-05 19:42:13.819395 | orchestrator | 2025-06-05 19:42:13.819406 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-05 19:42:13.819417 | orchestrator | Thursday 05 June 2025 19:41:12 +0000 (0:00:00.059) 0:01:42.448 ********* 2025-06-05 19:42:13.819428 | orchestrator | 2025-06-05 19:42:13.819439 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-05 19:42:13.819450 | orchestrator | Thursday 05 June 2025 19:41:12 +0000 (0:00:00.061) 0:01:42.509 ********* 2025-06-05 19:42:13.819461 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:42:13.819472 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:42:13.819483 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:42:13.819494 | orchestrator | 2025-06-05 19:42:13.819505 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 
2025-06-05 19:42:13.819516 | orchestrator | Thursday 05 June 2025 19:41:19 +0000 (0:00:07.467) 0:01:49.976 ********* 2025-06-05 19:42:13.819527 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:42:13.819538 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:42:13.819549 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:42:13.819559 | orchestrator | 2025-06-05 19:42:13.819570 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-05 19:42:13.819587 | orchestrator | Thursday 05 June 2025 19:41:27 +0000 (0:00:07.455) 0:01:57.431 ********* 2025-06-05 19:42:13.819598 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:42:13.819609 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:42:13.819620 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:42:13.819631 | orchestrator | 2025-06-05 19:42:13.819642 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-05 19:42:13.819653 | orchestrator | Thursday 05 June 2025 19:41:34 +0000 (0:00:07.865) 0:02:05.297 ********* 2025-06-05 19:42:13.819664 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:42:13.819674 | orchestrator | 2025-06-05 19:42:13.819685 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-05 19:42:13.819696 | orchestrator | Thursday 05 June 2025 19:41:34 +0000 (0:00:00.101) 0:02:05.399 ********* 2025-06-05 19:42:13.819707 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.819718 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.819729 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.819740 | orchestrator | 2025-06-05 19:42:13.819757 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-05 19:42:13.819768 | orchestrator | Thursday 05 June 2025 19:41:35 +0000 (0:00:00.769) 0:02:06.169 ********* 2025-06-05 19:42:13.819779 | orchestrator | 
skipping: [testbed-node-1] 2025-06-05 19:42:13.819790 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.819801 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:42:13.819812 | orchestrator | 2025-06-05 19:42:13.819823 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-05 19:42:13.819834 | orchestrator | Thursday 05 June 2025 19:41:36 +0000 (0:00:00.794) 0:02:06.964 ********* 2025-06-05 19:42:13.819845 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.819856 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.819867 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.819878 | orchestrator | 2025-06-05 19:42:13.819889 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-05 19:42:13.819900 | orchestrator | Thursday 05 June 2025 19:41:37 +0000 (0:00:00.775) 0:02:07.740 ********* 2025-06-05 19:42:13.819911 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:42:13.819923 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:42:13.819933 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:42:13.819944 | orchestrator | 2025-06-05 19:42:13.819955 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-05 19:42:13.819966 | orchestrator | Thursday 05 June 2025 19:41:38 +0000 (0:00:00.700) 0:02:08.440 ********* 2025-06-05 19:42:13.819977 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:42:13.819988 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:42:13.819999 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:42:13.820010 | orchestrator | 2025-06-05 19:42:13.820021 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-05 19:42:13.820032 | orchestrator | Thursday 05 June 2025 19:41:38 +0000 (0:00:00.739) 0:02:09.179 ********* 2025-06-05 19:42:13.820043 | orchestrator | ok: [testbed-node-0] 2025-06-05 
19:42:13.820054 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.820065 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.820076 | orchestrator |
2025-06-05 19:42:13.820087 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-06-05 19:42:13.820098 | orchestrator | Thursday 05 June 2025 19:41:39 +0000 (0:00:01.091) 0:02:10.270 *********
2025-06-05 19:42:13.820109 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.820120 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.820131 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.820142 | orchestrator |
2025-06-05 19:42:13.820153 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-06-05 19:42:13.820164 | orchestrator | Thursday 05 June 2025 19:41:40 +0000 (0:00:00.266) 0:02:10.537 *********
2025-06-05 19:42:13.820205 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820253 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820265 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820277 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820288 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820310 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820323 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820334 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820345 | orchestrator |
2025-06-05 19:42:13.820357 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-06-05 19:42:13.820368 | orchestrator | Thursday 05 June 2025 19:41:41 +0000 (0:00:01.589) 0:02:12.126 *********
2025-06-05 19:42:13.820379 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820397 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820408 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820432 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820480 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820503 | orchestrator |
2025-06-05 19:42:13.820514 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-06-05 19:42:13.820525 | orchestrator | Thursday 05 June 2025 19:41:45 +0000 (0:00:04.207) 0:02:16.334 *********
2025-06-05 19:42:13.820543 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820554 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820566 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820645 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:42:13.820661 | orchestrator |
2025-06-05 19:42:13.820681 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-05 19:42:13.820710 | orchestrator | Thursday 05 June 2025 19:41:48 +0000 (0:00:02.989) 0:02:19.324 *********
2025-06-05 19:42:13.820730 | orchestrator |
2025-06-05 19:42:13.820748 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-05 19:42:13.820759 | orchestrator | Thursday 05 June 2025 19:41:48 +0000 (0:00:00.059) 0:02:19.383 *********
2025-06-05 19:42:13.820770 | orchestrator |
2025-06-05 19:42:13.820781 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-06-05 19:42:13.820792 | orchestrator | Thursday 05 June 2025 19:41:49 +0000 (0:00:00.060) 0:02:19.444 *********
2025-06-05 19:42:13.820803 | orchestrator |
2025-06-05 19:42:13.820814 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-06-05 19:42:13.820825 | orchestrator | Thursday 05 June 2025 19:41:49 +0000 (0:00:00.069) 0:02:19.514 *********
2025-06-05 19:42:13.820836 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:42:13.820847 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:42:13.820858 | orchestrator |
2025-06-05 19:42:13.820869 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-06-05 19:42:13.820879 | orchestrator | Thursday 05 June 2025 19:41:55 +0000 (0:00:06.081) 0:02:25.596 *********
2025-06-05 19:42:13.820890 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:42:13.820901 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:42:13.820912 | orchestrator |
2025-06-05 19:42:13.820923 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-06-05 19:42:13.820934 | orchestrator | Thursday 05 June 2025 19:42:01 +0000 (0:00:06.350) 0:02:31.946 *********
2025-06-05 19:42:13.820945 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:42:13.820956 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:42:13.820967 | orchestrator |
2025-06-05 19:42:13.820978 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-06-05 19:42:13.820989 | orchestrator | Thursday 05 June 2025 19:42:07 +0000 (0:00:06.241) 0:02:38.188 *********
2025-06-05 19:42:13.821000 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:42:13.821010 | orchestrator |
2025-06-05 19:42:13.821021 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-06-05 19:42:13.821032 | orchestrator | Thursday 05 June 2025 19:42:07 +0000 (0:00:00.142) 0:02:38.330 *********
2025-06-05 19:42:13.821043 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.821054 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.821065 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.821076 | orchestrator |
2025-06-05 19:42:13.821087 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-06-05 19:42:13.821098 | orchestrator | Thursday 05 June 2025 19:42:09 +0000 (0:00:01.104) 0:02:39.434 *********
2025-06-05 19:42:13.821108 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:42:13.821120 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:42:13.821130 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:42:13.821141 | orchestrator |
2025-06-05 19:42:13.821152 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-06-05 19:42:13.821163 | orchestrator | Thursday 05 June 2025 19:42:09 +0000 (0:00:00.708) 0:02:40.143 *********
2025-06-05 19:42:13.821174 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.821185 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.821196 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.821206 | orchestrator |
2025-06-05 19:42:13.821217 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-06-05 19:42:13.821276 | orchestrator | Thursday 05 June 2025 19:42:10 +0000 (0:00:00.791) 0:02:40.935 *********
2025-06-05 19:42:13.821288 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:42:13.821298 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:42:13.821309 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:42:13.821320 | orchestrator |
2025-06-05 19:42:13.821331 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-06-05 19:42:13.821342 | orchestrator | Thursday 05 June 2025 19:42:11 +0000 (0:00:00.610) 0:02:41.545 *********
2025-06-05 19:42:13.821361 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.821372 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.821382 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.821393 | orchestrator |
2025-06-05 19:42:13.821405 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-06-05 19:42:13.821416 | orchestrator | Thursday 05 June 2025 19:42:11 +0000 (0:00:00.853) 0:02:42.398 *********
2025-06-05 19:42:13.821426 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:42:13.821437 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:42:13.821448 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:42:13.821459 | orchestrator |
2025-06-05 19:42:13.821470 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:42:13.821482 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-05 19:42:13.821493 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-05 19:42:13.821512 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-06-05 19:42:13.821524 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:42:13.821541 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:42:13.821553 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:42:13.821563 | orchestrator |
2025-06-05 19:42:13.821575 | orchestrator |
2025-06-05 19:42:13.821584 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:42:13.821594 | orchestrator | Thursday 05 June 2025 19:42:12 +0000 (0:00:00.784) 0:02:43.182 *********
2025-06-05 19:42:13.821604 | orchestrator | ===============================================================================
2025-06-05 19:42:13.821614 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 43.10s
2025-06-05 19:42:13.821623 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 23.03s
2025-06-05 19:42:13.821633 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.11s
2025-06-05 19:42:13.821643 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.81s
2025-06-05 19:42:13.821653 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.55s
2025-06-05 19:42:13.821662 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.48s
2025-06-05 19:42:13.821672 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.21s
2025-06-05 19:42:13.821682 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.99s
2025-06-05 19:42:13.821692 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.86s
2025-06-05 19:42:13.821701 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.59s
2025-06-05 19:42:13.821711 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.16s
2025-06-05 19:42:13.821721 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.78s
2025-06-05 19:42:13.821730 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.59s
2025-06-05 19:42:13.821740 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.58s
2025-06-05 19:42:13.821750 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.57s
2025-06-05 19:42:13.821759 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.50s
2025-06-05 19:42:13.821769 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.40s
2025-06-05 19:42:13.821784 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.13s
2025-06-05 19:42:13.821794 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.12s
2025-06-05 19:42:13.821804 | orchestrator | ovn-db : Get OVN_Northbound cluster leader ------------------------------ 1.10s
2025-06-05 19:42:16.852641 | orchestrator | 2025-06-05 19:42:16 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:16.855027 | orchestrator | 2025-06-05 19:42:16 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:16.855390 | orchestrator | 2025-06-05 19:42:16 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:19.893785 | orchestrator | 2025-06-05 19:42:19 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:19.893874 | orchestrator | 2025-06-05 19:42:19 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:19.893888 | orchestrator | 2025-06-05 19:42:19 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:22.937671 | orchestrator | 2025-06-05 19:42:22 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:22.939403 | orchestrator | 2025-06-05 19:42:22 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:22.939431 | orchestrator | 2025-06-05 19:42:22 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:25.991289 | orchestrator | 2025-06-05 19:42:25 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:25.991812 | orchestrator | 2025-06-05 19:42:25 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:25.992166 | orchestrator | 2025-06-05 19:42:25 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:29.033759 | orchestrator | 2025-06-05 19:42:29 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:29.035615 | orchestrator | 2025-06-05 19:42:29 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:29.035670 | orchestrator | 2025-06-05 19:42:29 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:32.086416 | orchestrator | 2025-06-05 19:42:32 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:32.089603 | orchestrator | 2025-06-05 19:42:32 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:32.089969 | orchestrator | 2025-06-05 19:42:32 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:35.121704 | orchestrator | 2025-06-05 19:42:35 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:35.121892 | orchestrator | 2025-06-05 19:42:35 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:35.121914 | orchestrator | 2025-06-05 19:42:35 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:38.163694 | orchestrator | 2025-06-05 19:42:38 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:38.164145 | orchestrator | 2025-06-05 19:42:38 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:38.164692 | orchestrator | 2025-06-05 19:42:38 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:41.204816 | orchestrator | 2025-06-05 19:42:41 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:41.205149 | orchestrator | 2025-06-05 19:42:41 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:41.205177 | orchestrator | 2025-06-05 19:42:41 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:44.252803 | orchestrator | 2025-06-05 19:42:44 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:44.253319 | orchestrator | 2025-06-05 19:42:44 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:44.253354 | orchestrator | 2025-06-05 19:42:44 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:47.288833 | orchestrator | 2025-06-05 19:42:47 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:47.290433 | orchestrator | 2025-06-05 19:42:47 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:47.290470 | orchestrator | 2025-06-05 19:42:47 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:50.326117 | orchestrator | 2025-06-05 19:42:50 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:50.326250 | orchestrator | 2025-06-05 19:42:50 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:50.326268 | orchestrator | 2025-06-05 19:42:50 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:53.366329 | orchestrator | 2025-06-05 19:42:53 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:53.367581 | orchestrator | 2025-06-05 19:42:53 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:53.367773 | orchestrator | 2025-06-05 19:42:53 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:56.410542 | orchestrator | 2025-06-05 19:42:56 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:56.412181 | orchestrator | 2025-06-05 19:42:56 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:56.412706 | orchestrator | 2025-06-05 19:42:56 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:42:59.448241 | orchestrator | 2025-06-05 19:42:59 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:42:59.449944 | orchestrator | 2025-06-05 19:42:59 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:42:59.450217 | orchestrator | 2025-06-05 19:42:59 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:02.483577 | orchestrator | 2025-06-05 19:43:02 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:02.485381 | orchestrator | 2025-06-05 19:43:02 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:02.485987 | orchestrator | 2025-06-05 19:43:02 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:05.527527 | orchestrator | 2025-06-05 19:43:05 | INFO  | Task d66c9571-78a8-4ac9-b1ba-e745c238e650 is in state STARTED
2025-06-05 19:43:05.529553 | orchestrator | 2025-06-05 19:43:05 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:05.532673 | orchestrator | 2025-06-05 19:43:05 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:05.532747 | orchestrator | 2025-06-05 19:43:05 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:08.574416 | orchestrator | 2025-06-05 19:43:08 | INFO  | Task d66c9571-78a8-4ac9-b1ba-e745c238e650 is in state STARTED
2025-06-05 19:43:08.575411 | orchestrator | 2025-06-05 19:43:08 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:08.577071 | orchestrator | 2025-06-05 19:43:08 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:08.577345 | orchestrator | 2025-06-05 19:43:08 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:11.608509 | orchestrator | 2025-06-05 19:43:11 | INFO  | Task d66c9571-78a8-4ac9-b1ba-e745c238e650 is in state STARTED
2025-06-05 19:43:11.608673 | orchestrator | 2025-06-05 19:43:11 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:11.609110 | orchestrator | 2025-06-05 19:43:11 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:11.609136 | orchestrator | 2025-06-05 19:43:11 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:14.659557 | orchestrator | 2025-06-05 19:43:14 | INFO  | Task d66c9571-78a8-4ac9-b1ba-e745c238e650 is in state STARTED
2025-06-05 19:43:14.660399 | orchestrator | 2025-06-05 19:43:14 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:14.661991 | orchestrator | 2025-06-05 19:43:14 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:14.662078 | orchestrator | 2025-06-05 19:43:14 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:17.698961 | orchestrator | 2025-06-05 19:43:17 | INFO  | Task d66c9571-78a8-4ac9-b1ba-e745c238e650 is in state STARTED
2025-06-05 19:43:17.700504 | orchestrator | 2025-06-05 19:43:17 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:17.702606 | orchestrator | 2025-06-05 19:43:17 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:17.702743 | orchestrator | 2025-06-05 19:43:17 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:20.743082 | orchestrator | 2025-06-05 19:43:20 | INFO  | Task d66c9571-78a8-4ac9-b1ba-e745c238e650 is in state SUCCESS
2025-06-05 19:43:20.743314 | orchestrator | 2025-06-05 19:43:20 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:20.745152 | orchestrator | 2025-06-05 19:43:20 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:20.745521 | orchestrator | 2025-06-05 19:43:20 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:23.794738 | orchestrator | 2025-06-05 19:43:23 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:23.795918 | orchestrator | 2025-06-05 19:43:23 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:23.795952 | orchestrator | 2025-06-05 19:43:23 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:26.847434 | orchestrator | 2025-06-05 19:43:26 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:26.847676 | orchestrator | 2025-06-05 19:43:26 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:26.847700 | orchestrator | 2025-06-05 19:43:26 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:29.901968 | orchestrator | 2025-06-05 19:43:29 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:29.903428 | orchestrator | 2025-06-05 19:43:29 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:29.904192 | orchestrator | 2025-06-05 19:43:29 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:32.953408 | orchestrator | 2025-06-05 19:43:32 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:32.954308 | orchestrator | 2025-06-05 19:43:32 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:32.954349 | orchestrator | 2025-06-05 19:43:32 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:36.015591 | orchestrator | 2025-06-05 19:43:36 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:36.019105 | orchestrator | 2025-06-05 19:43:36 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:36.019223 | orchestrator | 2025-06-05 19:43:36 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:39.057234 | orchestrator | 2025-06-05 19:43:39 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:39.058392 | orchestrator | 2025-06-05 19:43:39 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:39.058460 | orchestrator | 2025-06-05 19:43:39 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:42.099745 | orchestrator | 2025-06-05 19:43:42 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:42.099832 | orchestrator | 2025-06-05 19:43:42 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:42.099843 | orchestrator | 2025-06-05 19:43:42 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:45.134488 | orchestrator | 2025-06-05 19:43:45 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:45.134710 | orchestrator | 2025-06-05 19:43:45 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:45.134732 | orchestrator | 2025-06-05 19:43:45 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:48.181892 | orchestrator | 2025-06-05 19:43:48 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:48.183344 | orchestrator | 2025-06-05 19:43:48 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:48.183397 | orchestrator | 2025-06-05 19:43:48 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:51.232183 | orchestrator | 2025-06-05 19:43:51 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:51.238510 | orchestrator | 2025-06-05 19:43:51 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:51.238579 | orchestrator | 2025-06-05 19:43:51 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:54.280256 | orchestrator | 2025-06-05 19:43:54 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:54.280366 | orchestrator | 2025-06-05 19:43:54 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:54.280382 | orchestrator | 2025-06-05 19:43:54 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:43:57.327302 | orchestrator | 2025-06-05 19:43:57 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:43:57.329313 | orchestrator | 2025-06-05 19:43:57 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:43:57.329370 | orchestrator | 2025-06-05 19:43:57 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:00.387582 | orchestrator | 2025-06-05 19:44:00 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:44:00.390604 | orchestrator | 2025-06-05 19:44:00 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:44:00.390671 | orchestrator | 2025-06-05 19:44:00 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:03.432929 | orchestrator | 2025-06-05 19:44:03 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:44:03.433869 | orchestrator | 2025-06-05 19:44:03 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:44:03.434064 | orchestrator | 2025-06-05 19:44:03 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:06.481044 | orchestrator | 2025-06-05 19:44:06 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:44:06.482892 | orchestrator | 2025-06-05 19:44:06 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:44:06.482943 | orchestrator | 2025-06-05 19:44:06 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:09.527706 | orchestrator | 2025-06-05 19:44:09 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:44:09.529778 | orchestrator | 2025-06-05 19:44:09 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:44:09.529823 | orchestrator | 2025-06-05 19:44:09 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:12.575612 | orchestrator | 2025-06-05 19:44:12 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:44:12.576079 | orchestrator | 2025-06-05 19:44:12 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:44:12.576605 | orchestrator | 2025-06-05 19:44:12 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:15.619610 | orchestrator | 2025-06-05 19:44:15 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state STARTED
2025-06-05 19:44:15.619713 | orchestrator | 2025-06-05 19:44:15 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED
2025-06-05 19:44:15.619728 | orchestrator | 2025-06-05 19:44:15 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:44:18.661237 | orchestrator | 2025-06-05 19:44:18 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED
2025-06-05 19:44:18.673688 | orchestrator |
2025-06-05 19:44:18.673762 | orchestrator | None
2025-06-05 19:44:18.673776 | orchestrator | 2025-06-05 19:44:18 | INFO  | Task b8ab1e4f-25ad-4b5f-8955-f3f6efcbf0eb is in state SUCCESS
2025-06-05 19:44:18.675642 | orchestrator |
2025-06-05 19:44:18.675697 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:44:18.675718 | orchestrator |
2025-06-05 19:44:18.675738 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:44:18.675757 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.268) 0:00:00.268 *********
2025-06-05 19:44:18.675874 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:44:18.675899 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:44:18.675921 | orchestrator |
ok: [testbed-node-2]
2025-06-05 19:44:18.675940 | orchestrator |
2025-06-05 19:44:18.675955 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:44:18.675972 | orchestrator | Thursday 05 June 2025 19:38:20 +0000 (0:00:00.314) 0:00:00.582 *********
2025-06-05 19:44:18.675992 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-06-05 19:44:18.676010 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-06-05 19:44:18.676164 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-06-05 19:44:18.676193 | orchestrator |
2025-06-05 19:44:18.676216 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-06-05 19:44:18.676261 | orchestrator |
2025-06-05 19:44:18.676284 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-05 19:44:18.676356 | orchestrator | Thursday 05 June 2025 19:38:21 +0000 (0:00:00.509) 0:00:01.092 *********
2025-06-05 19:44:18.676378 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.676399 | orchestrator |
2025-06-05 19:44:18.676419 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-06-05 19:44:18.676439 | orchestrator | Thursday 05 June 2025 19:38:21 +0000 (0:00:00.742) 0:00:01.834 *********
2025-06-05 19:44:18.676460 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:44:18.676501 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:44:18.676513 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:44:18.676523 | orchestrator |
2025-06-05 19:44:18.676535 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-05 19:44:18.676546 | orchestrator | Thursday 05 June 2025 19:38:22 +0000 (0:00:00.674) 0:00:02.508 *********
2025-06-05 19:44:18.676557 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.676568 | orchestrator |
2025-06-05 19:44:18.676579 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-06-05 19:44:18.676589 | orchestrator | Thursday 05 June 2025 19:38:23 +0000 (0:00:00.670) 0:00:03.179 *********
2025-06-05 19:44:18.676600 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:44:18.676611 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:44:18.676622 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:44:18.676634 | orchestrator |
2025-06-05 19:44:18.676653 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-06-05 19:44:18.676725 | orchestrator | Thursday 05 June 2025 19:38:23 +0000 (0:00:00.581) 0:00:03.761 *********
2025-06-05 19:44:18.676745 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-05 19:44:18.676764 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-05 19:44:18.676782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-06-05 19:44:18.676800 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-05 19:44:18.676818 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-05 19:44:18.676837 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-06-05 19:44:18.676855 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-05 19:44:18.677048 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-05 19:44:18.677097 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-06-05 19:44:18.677121 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-05 19:44:18.677140 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-05 19:44:18.677157 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-06-05 19:44:18.677168 | orchestrator |
2025-06-05 19:44:18.677179 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-06-05 19:44:18.677190 | orchestrator | Thursday 05 June 2025 19:38:26 +0000 (0:00:02.978) 0:00:06.739 *********
2025-06-05 19:44:18.677201 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-05 19:44:18.677212 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-05 19:44:18.677223 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-05 19:44:18.677234 | orchestrator |
2025-06-05 19:44:18.677245 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-06-05 19:44:18.677256 | orchestrator | Thursday 05 June 2025 19:38:27 +0000 (0:00:00.891) 0:00:07.631 *********
2025-06-05 19:44:18.677267 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-06-05 19:44:18.677278 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-06-05 19:44:18.677289 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-06-05 19:44:18.677300 | orchestrator |
2025-06-05 19:44:18.677311 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-06-05 19:44:18.677337 | orchestrator | Thursday 05 June 2025 19:38:29 +0000 (0:00:01.587) 0:00:09.218 *********
2025-06-05 19:44:18.677348 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-06-05 19:44:18.677363 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.677452
| orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-06-05 19:44:18.677494 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.677515 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-06-05 19:44:18.677651 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.677762 | orchestrator |
2025-06-05 19:44:18.677787 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-06-05 19:44:18.677807 | orchestrator | Thursday 05 June 2025 19:38:30 +0000 (0:00:00.683) 0:00:09.901 *********
2025-06-05 19:44:18.677832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
[... matching 'haproxy', 'proxysql', and 'keepalived' config-directory items also changed on testbed-node-0 and testbed-node-2; the haproxy item dicts differ only in the node-local healthcheck address (192.168.16.10/.11/.12) ...]
2025-06-05 19:44:18.678008 | orchestrator |
2025-06-05 19:44:18.678119 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-06-05 19:44:18.678136 | orchestrator | Thursday 05 June 2025 19:38:32 +0000 (0:00:02.045) 0:00:11.946 *********
2025-06-05 19:44:18.678148 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.678159 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.678170 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.678181 | orchestrator |
2025-06-05 19:44:18.678192 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-06-05 19:44:18.678202 | orchestrator | Thursday 05 June 2025 19:38:34 +0000 (0:00:02.235) 0:00:14.182 *********
2025-06-05 19:44:18.678213 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-06-05 19:44:18.678224 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-06-05 19:44:18.678235 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-06-05 19:44:18.678249 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-06-05 19:44:18.678268 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-06-05 19:44:18.678311 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-06-05 19:44:18.678330 | orchestrator |
2025-06-05 19:44:18.678348 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-06-05 19:44:18.678366 | orchestrator | Thursday 05 June 2025 19:38:36 +0000 (0:00:02.171) 0:00:16.354 *********
2025-06-05 19:44:18.678551 | orchestrator | changed:
[testbed-node-0]
2025-06-05 19:44:18.678581 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.678603 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.678624 | orchestrator |
2025-06-05 19:44:18.678645 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-06-05 19:44:18.678665 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:02.975) 0:00:19.329 *********
2025-06-05 19:44:18.678679 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:44:18.678702 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:44:18.678713 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:44:18.678724 | orchestrator |
2025-06-05 19:44:18.678735 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-06-05 19:44:18.678746 | orchestrator | Thursday 05 June 2025 19:38:41 +0000 (0:00:01.825) 0:00:21.155 *********
2025-06-05 19:44:18.678758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
[... the matching 'proxysql', 'keepalived', and 'haproxy-ssh' items were likewise skipped on testbed-node-1, and the same four items were skipped on testbed-node-0 and testbed-node-2 ...]
2025-06-05 19:44:18.678829 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.678905 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.678969 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.678980 | orchestrator |
2025-06-05 19:44:18.678991 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-06-05 19:44:18.679010 | orchestrator | Thursday 05 June 2025 19:38:42 +0000 (0:00:00.793) 0:00:21.948 *********
2025-06-05 19:44:18.679029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
[... the 'haproxy' and 'proxysql' check items changed on all three nodes; the 'keepalived' and 'haproxy-ssh' items were skipped on all three nodes ...]
2025-06-05 19:44:18.679559 | orchestrator |
2025-06-05 19:44:18.679571 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-06-05 19:44:18.679582 | orchestrator | Thursday 05 June 2025 19:38:45 +0000 (0:00:03.893) 0:00:25.842 *********
2025-06-05 19:44:18.679594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
[... the 'haproxy' and 'proxysql' config.json items changed on all three nodes ...]
2025-06-05 19:44:18.679896 |
orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.679917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.679938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.679957 | orchestrator | 2025-06-05 19:44:18.679975 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-05 19:44:18.680114 | orchestrator | Thursday 05 June 2025 19:38:49 +0000 (0:00:03.558) 0:00:29.400 ********* 2025-06-05 19:44:18.680144 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-05 19:44:18.680178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-05 19:44:18.680201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-05 19:44:18.680220 | orchestrator | 2025-06-05 19:44:18.680236 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-05 19:44:18.680247 | orchestrator | Thursday 05 June 2025 19:38:51 +0000 (0:00:01.577) 0:00:30.977 ********* 2025-06-05 19:44:18.680258 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-05 19:44:18.680269 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-05 19:44:18.680280 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-05 19:44:18.680291 | orchestrator | 2025-06-05 19:44:18.680302 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-05 19:44:18.680313 | orchestrator | Thursday 05 June 2025 19:38:54 +0000 (0:00:03.763) 0:00:34.741 ********* 2025-06-05 19:44:18.680324 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.680335 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.680346 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.680357 | orchestrator | 2025-06-05 19:44:18.680368 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-05 19:44:18.680391 | orchestrator | Thursday 05 June 2025 19:38:56 +0000 (0:00:01.292) 0:00:36.033 ********* 2025-06-05 19:44:18.680402 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-05 19:44:18.680415 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-05 19:44:18.680426 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-05 19:44:18.680437 | orchestrator | 2025-06-05 19:44:18.680448 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-05 19:44:18.680459 | orchestrator | Thursday 05 June 2025 19:38:58 +0000 (0:00:02.300) 0:00:38.334 ********* 2025-06-05 19:44:18.680470 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-05 19:44:18.680481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-05 19:44:18.680492 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-05 19:44:18.680503 | orchestrator | 2025-06-05 19:44:18.680515 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-05 19:44:18.680533 | orchestrator | Thursday 05 June 2025 19:39:00 +0000 (0:00:01.803) 0:00:40.138 ********* 2025-06-05 19:44:18.680552 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-05 19:44:18.680571 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-05 19:44:18.680590 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-05 19:44:18.680609 | orchestrator | 2025-06-05 19:44:18.680628 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-05 19:44:18.680645 | orchestrator | Thursday 05 June 2025 19:39:01 +0000 (0:00:01.607) 0:00:41.745 ********* 
2025-06-05 19:44:18.680730 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-05 19:44:18.680743 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-05 19:44:18.680754 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-05 19:44:18.680765 | orchestrator | 2025-06-05 19:44:18.680776 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-05 19:44:18.680786 | orchestrator | Thursday 05 June 2025 19:39:03 +0000 (0:00:01.682) 0:00:43.428 ********* 2025-06-05 19:44:18.680797 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.680808 | orchestrator | 2025-06-05 19:44:18.680819 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-05 19:44:18.680830 | orchestrator | Thursday 05 June 2025 19:39:04 +0000 (0:00:01.005) 0:00:44.433 ********* 2025-06-05 19:44:18.680841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-05 19:44:18.680906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-05 19:44:18.680930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-05 19:44:18.680942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-05 19:44:18.680954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-05 19:44:18.680965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-05 19:44:18.680977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.680993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.681019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.681031 | orchestrator | 2025-06-05 19:44:18.681043 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-05 19:44:18.681054 | orchestrator | Thursday 05 June 2025 19:39:07 +0000 (0:00:03.356) 0:00:47.791 ********* 2025-06-05 19:44:18.681065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681273 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.681284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681338 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.681348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681379 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.681445 | orchestrator | 2025-06-05 19:44:18.681456 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-05 19:44:18.681466 | orchestrator | Thursday 05 June 2025 19:39:08 +0000 (0:00:00.724) 0:00:48.515 ********* 2025-06-05 19:44:18.681476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681553 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.681564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681595 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.681605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681643 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.681653 | orchestrator | 2025-06-05 19:44:18.681663 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-05 19:44:18.681673 | orchestrator | Thursday 05 June 2025 19:39:09 +0000 (0:00:01.306) 0:00:49.821 ********* 2025-06-05 19:44:18.681695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681727 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.681737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681779 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.681799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681831 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.681841 | orchestrator | 2025-06-05 19:44:18.681851 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-05 19:44:18.681862 | orchestrator | Thursday 05 June 2025 19:39:10 
+0000 (0:00:00.517) 0:00:50.339 ********* 2025-06-05 19:44:18.681916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.681954 | 
orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.681969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.681988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.681999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682009 | 
orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.682207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682252 | 
orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.682261 | orchestrator | 2025-06-05 19:44:18.682271 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-05 19:44:18.682281 | orchestrator | Thursday 05 June 2025 19:39:11 +0000 (0:00:00.818) 0:00:51.157 ********* 2025-06-05 19:44:18.682292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682338 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.682348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682386 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.682396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682438 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.682448 | orchestrator | 2025-06-05 19:44:18.682458 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-05 19:44:18.682468 | orchestrator | Thursday 05 June 2025 19:39:12 +0000 (0:00:01.363) 0:00:52.521 ********* 2025-06-05 19:44:18.682478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
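The skipped loop items above expose kolla-ansible's per-service definition dict, including a `healthcheck` block whose values are plain strings in seconds. As a minimal sketch (not kolla-ansible's actual code), the haproxy entry from this log can be reconstructed and mapped onto the shape Docker's API expects, where durations are nanoseconds; the `to_nanoseconds` helper here is hypothetical, introduced only for illustration.

```python
# Reconstruction of the haproxy service definition shown in the log above.
# The to_nanoseconds helper is an assumption for illustration; kolla's real
# conversion happens inside its container-engine modules.
haproxy = {
    "container_name": "haproxy",
    "group": "loadbalancer",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/haproxy:2.6.12.20250530",
    "privileged": True,
    "volumes": [
        "/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "haproxy_socket:/var/lib/kolla/haproxy/",
        "letsencrypt_certificates:/etc/haproxy/certificates",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",  # seconds, kept as strings in the inventory
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
        "timeout": "30",
    },
}


def to_nanoseconds(seconds: str) -> int:
    """Docker's HealthConfig expresses durations in nanoseconds."""
    return int(seconds) * 1_000_000_000


# Docker-API-style healthcheck derived from the kolla-style dict above.
docker_healthcheck = {
    "test": haproxy["healthcheck"]["test"],
    "interval": to_nanoseconds(haproxy["healthcheck"]["interval"]),
    "timeout": to_nanoseconds(haproxy["healthcheck"]["timeout"]),
    "start_period": to_nanoseconds(haproxy["healthcheck"]["start_period"]),
    "retries": int(haproxy["healthcheck"]["retries"]),
}
```

The string-valued fields in the inventory explain why the log prints `'interval': '30'` rather than an integer.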
2025-06-05 19:44:18.682505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682515 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.682525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-06-05 19:44:18.682602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682613 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.682622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
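The proxysql healthcheck in these items runs `healthcheck_listen proxysql 6032`, i.e. it passes when the named process is listening on that port, in contrast to the `healthcheck_curl` probe used for haproxy. A rough analogue, assuming only that "listening" can be approximated by a successful TCP connect (the real kolla script inspects the process's sockets, so this is a simplification):

```python
import socket


def port_is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Approximate kolla's healthcheck_listen: succeed if something
    accepts a TCP connection on the given host/port.  This connect-based
    check is an illustrative assumption, not the script's real logic."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Example probe against the ProxySQL admin port seen in the log:
# port_is_listening("192.168.16.10", 6032)
```

Using a connect probe rather than an HTTP request fits ProxySQL's admin interface, which speaks the MySQL protocol on 6032 rather than HTTP.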
2025-06-05 19:44:18.682645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682653 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.682662 | orchestrator | 2025-06-05 19:44:18.682670 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-05 19:44:18.682678 | orchestrator | Thursday 05 June 2025 19:39:13 +0000 (0:00:00.908) 0:00:53.430 ********* 2025-06-05 19:44:18.682686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682721 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.682764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682797 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.682806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682831 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.682839 | orchestrator | 2025-06-05 19:44:18.682851 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-05 19:44:18.682864 | orchestrator | Thursday 05 June 2025 19:39:14 +0000 (0:00:00.782) 0:00:54.212 ********* 2025-06-05 19:44:18.682873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.682882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.682955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.682971 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.682985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.683000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.683015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.683029 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.683114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-05 19:44:18.683169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-05 19:44:18.683197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-05 19:44:18.683276 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.683295 | orchestrator | 2025-06-05 19:44:18.683310 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-05 19:44:18.683324 | orchestrator | Thursday 05 June 2025 19:39:15 +0000 (0:00:01.237) 0:00:55.449 ********* 2025-06-05 19:44:18.683337 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-05 19:44:18.683349 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-05 19:44:18.683357 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-05 19:44:18.683365 | orchestrator | 2025-06-05 
19:44:18.683373 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-05 19:44:18.683381 | orchestrator | Thursday 05 June 2025 19:39:17 +0000 (0:00:01.639) 0:00:57.089 ********* 2025-06-05 19:44:18.683389 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-05 19:44:18.683397 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-05 19:44:18.683405 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-05 19:44:18.683413 | orchestrator | 2025-06-05 19:44:18.683420 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-05 19:44:18.683428 | orchestrator | Thursday 05 June 2025 19:39:19 +0000 (0:00:01.916) 0:00:59.006 ********* 2025-06-05 19:44:18.683436 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-05 19:44:18.683444 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-05 19:44:18.683452 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-05 19:44:18.683461 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.683469 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-05 19:44:18.683477 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-05 19:44:18.683484 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.683493 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-05 19:44:18.683501 | orchestrator | skipping: 
[testbed-node-2] 2025-06-05 19:44:18.683509 | orchestrator | 2025-06-05 19:44:18.683516 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-05 19:44:18.683524 | orchestrator | Thursday 05 June 2025 19:39:20 +0000 (0:00:01.037) 0:01:00.044 ********* 2025-06-05 19:44:18.683545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-05 19:44:18.683564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-05 19:44:18.683573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-05 19:44:18.683581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-05 19:44:18.683590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-05 19:44:18.683598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-05 19:44:18.683610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.683693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.683716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-05 19:44:18.683730 | orchestrator | 2025-06-05 19:44:18.683746 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-05 19:44:18.683759 | orchestrator | Thursday 05 June 2025 19:39:22 +0000 (0:00:02.650) 0:01:02.694 ********* 2025-06-05 19:44:18.683768 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.683776 | orchestrator | 2025-06-05 19:44:18.683786 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-05 19:44:18.683799 | orchestrator | Thursday 05 June 2025 19:39:23 +0000 (0:00:00.617) 0:01:03.311 ********* 2025-06-05 19:44:18.683862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-05 19:44:18.683879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.683894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.683917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.683978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-05 19:44:18.683989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.684004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 
'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-05 19:44:18.684056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  
2025-06-05 19:44:18.684106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684129 | orchestrator | 2025-06-05 19:44:18.684139 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-05 19:44:18.684153 | orchestrator | Thursday 05 June 2025 19:39:26 +0000 (0:00:02.999) 0:01:06.310 ********* 2025-06-05 19:44:18.684201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-05 19:44:18.684219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.684234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-05 
19:44:18.684270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.684288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684297 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.684305 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-05 19:44:18.684327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684336 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.684344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.684356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.684374 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.684382 | orchestrator | 2025-06-05 19:44:18.684390 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-05 19:44:18.684401 | orchestrator | Thursday 05 June 2025 19:39:27 +0000 (0:00:00.789) 0:01:07.100 ********* 2025-06-05 19:44:18.684428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-05 19:44:18.684472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-05 19:44:18.684489 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.684505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-05 19:44:18.684519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-05 19:44:18.684534 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.684548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-05 19:44:18.684649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-05 19:44:18.684668 | orchestrator | skipping: 
[testbed-node-2] 2025-06-05 19:44:18.684684 | orchestrator | 2025-06-05 19:44:18.684698 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-05 19:44:18.684709 | orchestrator | Thursday 05 June 2025 19:39:28 +0000 (0:00:01.026) 0:01:08.127 ********* 2025-06-05 19:44:18.684717 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.684725 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.684733 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.684741 | orchestrator | 2025-06-05 19:44:18.684749 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-05 19:44:18.684757 | orchestrator | Thursday 05 June 2025 19:39:29 +0000 (0:00:01.355) 0:01:09.483 ********* 2025-06-05 19:44:18.684787 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.684801 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.684814 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.684827 | orchestrator | 2025-06-05 19:44:18.684906 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-05 19:44:18.684925 | orchestrator | Thursday 05 June 2025 19:39:31 +0000 (0:00:02.183) 0:01:11.666 ********* 2025-06-05 19:44:18.684940 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.684954 | orchestrator | 2025-06-05 19:44:18.684967 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-05 19:44:18.684979 | orchestrator | Thursday 05 June 2025 19:39:32 +0000 (0:00:00.578) 0:01:12.244 ********* 2025-06-05 19:44:18.685025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.685038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.685047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.685156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.685177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.685185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.685268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685345 | orchestrator |
2025-06-05 19:44:18.685393 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] ***
2025-06-05 19:44:18.685406 | orchestrator | Thursday 05 June 2025 19:39:36 +0000 (0:00:04.124) 0:01:16.369 *********
2025-06-05 19:44:18.685420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port':
'9311', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.685433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685539 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.685554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.685576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685595 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.685602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group':
'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.685618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.685633 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.685640 | orchestrator |
2025-06-05 19:44:18.685647 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] **********************
2025-06-05 19:44:18.685654 | orchestrator | Thursday 05 June 2025 19:39:37 +0000 (0:00:00.504) 0:01:16.873 *********
2025-06-05 19:44:18.685668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-05 19:44:18.685675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-05 19:44:18.685682 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.685689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-05 19:44:18.685696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-05 19:44:18.685703 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.685710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-05 19:44:18.685717 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})
2025-06-05 19:44:18.685724 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.685731 | orchestrator |
2025-06-05 19:44:18.685738 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] ***********
2025-06-05 19:44:18.685745 | orchestrator | Thursday 05 June 2025 19:39:37 +0000 (0:00:00.974) 0:01:17.848 *********
2025-06-05 19:44:18.685751 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.685758 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.685765 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.685772 | orchestrator |
2025-06-05 19:44:18.685778 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] ***********
2025-06-05 19:44:18.685785 | orchestrator | Thursday 05 June 2025 19:39:39 +0000 (0:00:01.507) 0:01:19.355 *********
2025-06-05 19:44:18.685792 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.685799 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.685805 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.685812 | orchestrator |
2025-06-05 19:44:18.685819 | orchestrator | TASK [include_role : blazar] ***************************************************
2025-06-05 19:44:18.685825 | orchestrator | Thursday 05 June 2025 19:39:41 +0000 (0:00:02.038) 0:01:21.394 *********
2025-06-05 19:44:18.685832 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.685839 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.685845 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.685852 | orchestrator |
2025-06-05 19:44:18.685858 | orchestrator | TASK [include_role : ceph-rgw] *************************************************
2025-06-05 19:44:18.685865 | orchestrator | Thursday 05 June 2025 19:39:41 +0000 (0:00:00.290) 0:01:21.684 *********
2025-06-05 19:44:18.685872 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.685878 | orchestrator |
2025-06-05 19:44:18.685885 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-06-05 19:44:18.685892 | orchestrator | Thursday 05 June 2025 19:39:42 +0000 (0:00:00.711) 0:01:22.396 *********
2025-06-05 19:44:18.685915 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-05 19:44:18.685958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter
2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-05 19:44:18.685972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-05 19:44:18.685980 | orchestrator |
2025-06-05 19:44:18.685987 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-06-05 19:44:18.685994 | orchestrator | Thursday 05 June 2025 19:39:48 +0000 (0:00:05.728) 0:01:28.125 *********
2025-06-05 19:44:18.686001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-05 19:44:18.686008 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.687973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-05 19:44:18.688045 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.688151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-06-05 19:44:18.688164 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.688171 | orchestrator |
2025-06-05 19:44:18.688178 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-06-05 19:44:18.688185 | orchestrator | Thursday 05 June 2025 19:39:49 +0000 (0:00:01.428) 0:01:29.553 *********
2025-06-05 19:44:18.688194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-05 19:44:18.688202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-05 19:44:18.688210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-05 19:44:18.688217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-05 19:44:18.688226 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.688232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-05 19:44:18.688239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-06-05 19:44:18.688251 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.688258 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.688264 | orchestrator |
2025-06-05 19:44:18.688270 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-06-05 19:44:18.688276 | orchestrator | Thursday 05 June 2025 19:39:51 +0000 (0:00:01.592) 0:01:31.146 *********
2025-06-05 19:44:18.688283 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.688289 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.688295 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.688301 | orchestrator |
2025-06-05 19:44:18.688307 |
orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-06-05 19:44:18.688314 | orchestrator | Thursday 05 June 2025 19:39:51 +0000 (0:00:00.654) 0:01:31.800 *********
2025-06-05 19:44:18.688320 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.688326 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.688335 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.688341 | orchestrator |
2025-06-05 19:44:18.688347 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-06-05 19:44:18.688359 | orchestrator | Thursday 05 June 2025 19:39:53 +0000 (0:00:01.152) 0:01:32.953 *********
2025-06-05 19:44:18.688366 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.688372 | orchestrator |
2025-06-05 19:44:18.688379 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-06-05 19:44:18.688385 | orchestrator | Thursday 05 June 2025 19:39:53 +0000 (0:00:00.690) 0:01:33.644 *********
2025-06-05 19:44:18.688391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.688399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.688406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.688472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688499 | orchestrator |
2025-06-05 19:44:18.688506 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-06-05 19:44:18.688512 | orchestrator | Thursday 05 June 2025 19:39:57 +0000 (0:00:03.752) 0:01:37.396 *********
2025-06-05 19:44:18.688519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.688529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.688549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688556 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.688563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.688569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688593 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.688607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.688616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688643 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.688650 | orchestrator | 2025-06-05 19:44:18.688657 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-05 19:44:18.688665 | orchestrator | Thursday 05 June 2025 19:39:58 +0000 (0:00:01.308) 0:01:38.705 ********* 2025-06-05 19:44:18.688672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-05 19:44:18.688680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-05 19:44:18.688688 | orchestrator | skipping: [testbed-node-0] 
2025-06-05 19:44:18.688695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-05 19:44:18.688702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-05 19:44:18.688710 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.688723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-05 19:44:18.688731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-05 19:44:18.688738 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.688745 | orchestrator | 2025-06-05 19:44:18.688753 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-05 19:44:18.688760 | orchestrator | Thursday 05 June 2025 19:40:00 +0000 (0:00:01.349) 0:01:40.054 ********* 2025-06-05 19:44:18.688767 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.688774 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.688781 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.688788 | orchestrator | 2025-06-05 19:44:18.688795 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-05 19:44:18.688803 | orchestrator | Thursday 05 June 2025 19:40:01 +0000 (0:00:01.767) 0:01:41.822 ********* 2025-06-05 19:44:18.688810 | orchestrator | changed: 
[testbed-node-0] 2025-06-05 19:44:18.688817 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.688824 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.688831 | orchestrator | 2025-06-05 19:44:18.688838 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-05 19:44:18.688846 | orchestrator | Thursday 05 June 2025 19:40:04 +0000 (0:00:02.307) 0:01:44.129 ********* 2025-06-05 19:44:18.688859 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.688866 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.688873 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.688880 | orchestrator | 2025-06-05 19:44:18.688887 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-05 19:44:18.688894 | orchestrator | Thursday 05 June 2025 19:40:04 +0000 (0:00:00.406) 0:01:44.535 ********* 2025-06-05 19:44:18.688901 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.688908 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.688916 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.688922 | orchestrator | 2025-06-05 19:44:18.688929 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-05 19:44:18.688936 | orchestrator | Thursday 05 June 2025 19:40:04 +0000 (0:00:00.266) 0:01:44.802 ********* 2025-06-05 19:44:18.688944 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.688951 | orchestrator | 2025-06-05 19:44:18.688958 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-05 19:44:18.688964 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:00.867) 0:01:45.670 ********* 2025-06-05 19:44:18.688971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:44:18.688977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:44:18.688984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.688999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:44:18.689037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:44:18.689051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:44:18.689122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:44:18.689133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689166 | orchestrator | 2025-06-05 19:44:18.689172 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-05 19:44:18.689178 | orchestrator | Thursday 05 June 2025 19:40:11 +0000 (0:00:05.687) 0:01:51.358 ********* 2025-06-05 19:44:18.689191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:44:18.689202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:44:18.689209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:44:18.689216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 
19:44:18.689222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689285 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.689292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:44:18.689316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689322 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.689329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:44:18.689336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.689380 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.689386 | orchestrator | 2025-06-05 19:44:18.689393 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-05 19:44:18.689399 | orchestrator | Thursday 05 June 2025 19:40:12 +0000 (0:00:00.810) 0:01:52.168 ********* 2025-06-05 19:44:18.689406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-05 19:44:18.689412 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-05 19:44:18.689419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-05 19:44:18.689425 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.689431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-05 19:44:18.689438 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.689444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-05 19:44:18.689450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-05 19:44:18.689457 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.689463 | orchestrator | 2025-06-05 19:44:18.689469 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-05 19:44:18.689476 | orchestrator | Thursday 05 June 2025 19:40:13 +0000 (0:00:01.064) 0:01:53.233 ********* 2025-06-05 19:44:18.689482 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.689488 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.689494 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.689506 | orchestrator | 2025-06-05 19:44:18.689513 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL 
rules config] ********** 2025-06-05 19:44:18.689519 | orchestrator | Thursday 05 June 2025 19:40:15 +0000 (0:00:01.823) 0:01:55.056 ********* 2025-06-05 19:44:18.689525 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.689531 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.689538 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.689544 | orchestrator | 2025-06-05 19:44:18.689550 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-05 19:44:18.689556 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:01.819) 0:01:56.876 ********* 2025-06-05 19:44:18.689563 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.689569 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.689575 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.689581 | orchestrator | 2025-06-05 19:44:18.689588 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-05 19:44:18.689594 | orchestrator | Thursday 05 June 2025 19:40:17 +0000 (0:00:00.294) 0:01:57.171 ********* 2025-06-05 19:44:18.689600 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.689606 | orchestrator | 2025-06-05 19:44:18.689612 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-05 19:44:18.689619 | orchestrator | Thursday 05 June 2025 19:40:18 +0000 (0:00:00.828) 0:01:57.999 ********* 2025-06-05 19:44:18.689634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:44:18.689643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:44:18.689660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.689668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.690250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:44:18.690333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.690369 | orchestrator | 2025-06-05 19:44:18.690383 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-05 19:44:18.690395 | orchestrator | Thursday 05 June 2025 19:40:22 +0000 (0:00:04.642) 0:02:02.642 ********* 2025-06-05 19:44:18.690437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:44:18.690453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-05 19:44:18.690474 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.690487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-05 19:44:18.690516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-05 19:44:18.690529 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.690542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-05 19:44:18.690577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-05 19:44:18.690591 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.690603 | orchestrator |
2025-06-05 19:44:18.690614 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-06-05 19:44:18.690626 | orchestrator | Thursday 05 June 2025 19:40:25 +0000 (0:00:02.608) 0:02:05.250 *********
2025-06-05 19:44:18.690639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-05 19:44:18.690659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-05 19:44:18.690671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-05 19:44:18.690683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-05 19:44:18.690695 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.690707 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.690719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-05 19:44:18.690742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-05 19:44:18.690754 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.690765 | orchestrator |
2025-06-05 19:44:18.690776 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-06-05 19:44:18.690788 | orchestrator | Thursday 05 June 2025 19:40:28 +0000 (0:00:03.215) 0:02:08.465 *********
2025-06-05 19:44:18.690799 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.690810 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.690821 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.690832 | orchestrator |
2025-06-05 19:44:18.690843 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-06-05 19:44:18.690854 | orchestrator | Thursday 05 June 2025 19:40:30 +0000 (0:00:01.756) 0:02:10.222 *********
2025-06-05 19:44:18.690865 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.690877 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.690894 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.690905 | orchestrator |
2025-06-05 19:44:18.690916 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-06-05 19:44:18.690928 | orchestrator | Thursday 05 June 2025 19:40:32 +0000 (0:00:02.023) 0:02:12.246 *********
2025-06-05 19:44:18.690939 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.690950 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.690961 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.690972 | orchestrator |
2025-06-05 19:44:18.690983 | orchestrator | TASK [include_role : grafana] **************************************************
2025-06-05 19:44:18.690994 | orchestrator | Thursday 05 June 2025 19:40:32 +0000 (0:00:00.331) 0:02:12.577 *********
2025-06-05 19:44:18.691005 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.691016 | orchestrator |
2025-06-05 19:44:18.691027 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-06-05 19:44:18.691038 | orchestrator | Thursday 05 June 2025 19:40:33 +0000 (0:00:01.140) 0:02:13.717 *********
2025-06-05 19:44:18.691051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:44:18.691063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:44:18.691140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:44:18.691153 | orchestrator |
2025-06-05 19:44:18.691165 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-06-05 19:44:18.691176 | orchestrator | Thursday 05 June 2025 19:40:37 +0000 (0:00:03.891) 0:02:17.609 *********
2025-06-05 19:44:18.691200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:44:18.691221 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.691259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:44:18.691272 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.691283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:44:18.691294 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.691305 | orchestrator |
2025-06-05 19:44:18.691316 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-06-05 19:44:18.691327 | orchestrator | Thursday 05 June 2025 19:40:38 +0000 (0:00:00.374) 0:02:17.984 *********
2025-06-05 19:44:18.691339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-05 19:44:18.691352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-05 19:44:18.691364 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.691375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-05 19:44:18.691387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-05 19:44:18.691398 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.691409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-05 19:44:18.691420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-05 19:44:18.691431 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.691442 | orchestrator |
2025-06-05 19:44:18.691453 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-06-05 19:44:18.691464 | orchestrator | Thursday 05 June 2025 19:40:38 +0000 (0:00:00.611) 0:02:18.596 *********
2025-06-05 19:44:18.691475 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.691486 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.691496 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.691507 | orchestrator |
2025-06-05 19:44:18.691526 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-06-05 19:44:18.691537 | orchestrator | Thursday 05 June 2025 19:40:40 +0000 (0:00:01.670) 0:02:20.266 *********
2025-06-05 19:44:18.691548 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.691564 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.691575 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.691586 | orchestrator |
2025-06-05 19:44:18.691603 | orchestrator | TASK [include_role : heat] *****************************************************
2025-06-05 19:44:18.691615 | orchestrator | Thursday 05 June 2025 19:40:42 +0000 (0:00:02.107) 0:02:22.374 *********
2025-06-05 19:44:18.691626 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.691637 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.691648 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.691658 | orchestrator |
2025-06-05 19:44:18.691669 | orchestrator | TASK [include_role : horizon] **************************************************
2025-06-05 19:44:18.691680 | orchestrator | Thursday 05 June 2025 19:40:42 +0000 (0:00:00.336) 0:02:22.710 *********
2025-06-05 19:44:18.691691 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.691702 | orchestrator |
2025-06-05 19:44:18.691713 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-06-05 19:44:18.691724 | orchestrator | Thursday 05 June 2025 19:40:43 +0000 (0:00:00.867) 0:02:23.578 *********
2025-06-05 19:44:18.691737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-05 19:44:18.691769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-05 19:44:18.691789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-05 19:44:18.691802 | orchestrator |
2025-06-05 19:44:18.691813 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-06-05 19:44:18.691831 | orchestrator | Thursday 05 June 2025 19:40:47 +0000 (0:00:03.584) 0:02:27.162 *********
2025-06-05 19:44:18.691858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-05 19:44:18.691871 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.691883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-05 19:44:18.691901 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.691926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-05 19:44:18.691938 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.691950 | orchestrator |
2025-06-05 19:44:18.691961 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-06-05 19:44:18.691972 | orchestrator | Thursday 05 June 2025 19:40:47 +0000 (0:00:00.672) 0:02:27.834 *********
2025-06-05 19:44:18.691984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-05 19:44:18.691998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-05 19:44:18.692011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-05 19:44:18.692030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-05 19:44:18.692041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-05 19:44:18.692053 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.692064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-05 19:44:18.692097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-05 19:44:18.692116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-05 19:44:18.692128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-05 19:44:18.692140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-05 19:44:18.692151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-05 19:44:18.692162 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.692173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-05 19:44:18.692185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-05 19:44:18.692197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-05 19:44:18.692208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-05 19:44:18.692227 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.692238 | orchestrator | 2025-06-05 19:44:18.692249 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-05 19:44:18.692260 | orchestrator | Thursday 05 June 2025 19:40:48 +0000 (0:00:00.910) 0:02:28.745 ********* 2025-06-05 19:44:18.692271 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.692282 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.692293 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.692304 | orchestrator | 2025-06-05 19:44:18.692315 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-05 19:44:18.692326 | orchestrator | Thursday 05 June 2025 19:40:50 +0000 (0:00:01.607) 0:02:30.353 ********* 2025-06-05 19:44:18.692337 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.692348 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.692359 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.692369 | orchestrator | 2025-06-05 19:44:18.692380 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-05 19:44:18.692391 | orchestrator | Thursday 05 June 2025 19:40:52 +0000 (0:00:01.984) 0:02:32.337 ********* 2025-06-05 
19:44:18.692402 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.692413 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.692424 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.692435 | orchestrator | 2025-06-05 19:44:18.692446 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-05 19:44:18.692457 | orchestrator | Thursday 05 June 2025 19:40:52 +0000 (0:00:00.308) 0:02:32.646 ********* 2025-06-05 19:44:18.692468 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.692479 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.692490 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.692500 | orchestrator | 2025-06-05 19:44:18.692511 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-05 19:44:18.692522 | orchestrator | Thursday 05 June 2025 19:40:53 +0000 (0:00:00.301) 0:02:32.947 ********* 2025-06-05 19:44:18.692533 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.692544 | orchestrator | 2025-06-05 19:44:18.692555 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-05 19:44:18.692566 | orchestrator | Thursday 05 June 2025 19:40:54 +0000 (0:00:01.102) 0:02:34.050 ********* 2025-06-05 19:44:18.692591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:44:18.692605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:44:18.692625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:44:18.692638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:44:18.692650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:44:18.692673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:44:18.692686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:44:18.692705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:44:18.692717 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:44:18.692728 | orchestrator | 2025-06-05 19:44:18.692740 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-05 19:44:18.692751 | orchestrator | Thursday 05 June 2025 19:40:57 +0000 (0:00:03.627) 0:02:37.677 ********* 2025-06-05 19:44:18.692763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 
19:44:18.692785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:44:18.692797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:44:18.692809 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.692827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:44:18.692839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:44:18.692851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:44:18.692863 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.692889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:44:18.692902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:44:18.692920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:44:18.692932 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.692943 | orchestrator | 2025-06-05 19:44:18.692954 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-05 19:44:18.692965 | orchestrator | Thursday 05 June 2025 19:40:58 +0000 (0:00:00.539) 0:02:38.217 ********* 2025-06-05 19:44:18.692976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-05 19:44:18.692988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-05 19:44:18.692999 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.693011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-05 19:44:18.693022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-05 19:44:18.693034 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.693045 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-05 19:44:18.693056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-05 19:44:18.693067 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.693098 | orchestrator | 2025-06-05 19:44:18.693110 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-05 19:44:18.693121 | orchestrator | Thursday 05 June 2025 19:40:59 +0000 (0:00:00.993) 0:02:39.211 ********* 2025-06-05 19:44:18.693132 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.693143 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.693154 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.693178 | orchestrator | 2025-06-05 19:44:18.693189 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-05 19:44:18.693211 | orchestrator | Thursday 05 June 2025 19:41:00 +0000 (0:00:01.407) 0:02:40.619 ********* 2025-06-05 19:44:18.693222 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.693233 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.693244 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.693255 | orchestrator | 2025-06-05 19:44:18.693271 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-05 19:44:18.693293 | orchestrator | Thursday 05 June 2025 19:41:02 +0000 (0:00:02.074) 0:02:42.693 ********* 2025-06-05 19:44:18.693311 | orchestrator | skipping: [testbed-node-0] 2025-06-05 
19:44:18.693323 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.693334 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.693345 | orchestrator | 2025-06-05 19:44:18.693356 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-05 19:44:18.693366 | orchestrator | Thursday 05 June 2025 19:41:03 +0000 (0:00:00.335) 0:02:43.029 ********* 2025-06-05 19:44:18.693377 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.693388 | orchestrator | 2025-06-05 19:44:18.693399 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-05 19:44:18.693410 | orchestrator | Thursday 05 June 2025 19:41:04 +0000 (0:00:01.189) 0:02:44.219 ********* 2025-06-05 19:44:18.693422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:44:18.693435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.693447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:44:18.693459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.693489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:44:18.693501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.693512 | orchestrator | 2025-06-05 19:44:18.693523 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-05 19:44:18.693534 | orchestrator | Thursday 05 June 2025 19:41:08 +0000 (0:00:03.979) 0:02:48.198 ********* 2025-06-05 19:44:18.693546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:44:18.693558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.693569 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.693792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:44:18.693822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.693834 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.693846 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:44:18.693858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.693870 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.693881 | orchestrator | 2025-06-05 19:44:18.693893 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-05 19:44:18.693904 | orchestrator | 
Thursday 05 June 2025 19:41:08 +0000 (0:00:00.603) 0:02:48.801 ********* 2025-06-05 19:44:18.693915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-05 19:44:18.693927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-05 19:44:18.693945 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.693956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-05 19:44:18.693968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-05 19:44:18.693979 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.693990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-05 19:44:18.694006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-05 19:44:18.694203 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.694226 | orchestrator | 2025-06-05 19:44:18.694237 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-05 19:44:18.694249 | orchestrator | Thursday 05 June 2025 19:41:10 +0000 (0:00:01.118) 0:02:49.920 ********* 2025-06-05 19:44:18.694260 | 
orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.694271 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.694282 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.694293 | orchestrator | 2025-06-05 19:44:18.694304 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-05 19:44:18.694315 | orchestrator | Thursday 05 June 2025 19:41:11 +0000 (0:00:01.383) 0:02:51.303 ********* 2025-06-05 19:44:18.694326 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.694337 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.694348 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.694359 | orchestrator | 2025-06-05 19:44:18.694370 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-05 19:44:18.694381 | orchestrator | Thursday 05 June 2025 19:41:13 +0000 (0:00:01.815) 0:02:53.119 ********* 2025-06-05 19:44:18.694392 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.694403 | orchestrator | 2025-06-05 19:44:18.694414 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-05 19:44:18.694425 | orchestrator | Thursday 05 June 2025 19:41:14 +0000 (0:00:00.922) 0:02:54.041 ********* 2025-06-05 19:44:18.694437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-05 19:44:18.694449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-05 19:44:18.694471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-05 19:44:18.694567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694725 | orchestrator | 2025-06-05 19:44:18.694735 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-05 19:44:18.694745 | orchestrator | Thursday 05 June 2025 19:41:17 +0000 (0:00:03.232) 0:02:57.274 ********* 2025-06-05 19:44:18.694755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-05 19:44:18.694772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-05 19:44:18.694862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694914 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.694925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.694935 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.694945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-05 19:44:18.695017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.695032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 
'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.695042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.695052 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.695062 | orchestrator | 2025-06-05 19:44:18.695102 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-05 19:44:18.695113 | orchestrator | Thursday 05 June 2025 19:41:17 +0000 (0:00:00.566) 0:02:57.841 ********* 2025-06-05 19:44:18.695123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-05 19:44:18.695133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 
'listen_port': '8786'}})  2025-06-05 19:44:18.695143 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.695153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-05 19:44:18.695163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-05 19:44:18.695173 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.695183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-05 19:44:18.695193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-05 19:44:18.695202 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.695212 | orchestrator | 2025-06-05 19:44:18.695222 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-05 19:44:18.695232 | orchestrator | Thursday 05 June 2025 19:41:18 +0000 (0:00:00.731) 0:02:58.572 ********* 2025-06-05 19:44:18.695241 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.695251 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.695261 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.695270 | orchestrator | 2025-06-05 19:44:18.695280 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-05 19:44:18.695289 | orchestrator | Thursday 05 June 2025 19:41:20 +0000 (0:00:01.419) 0:02:59.991 ********* 2025-06-05 19:44:18.695299 | orchestrator | 
changed: [testbed-node-0] 2025-06-05 19:44:18.695309 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.695318 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.695328 | orchestrator | 2025-06-05 19:44:18.695337 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-05 19:44:18.695347 | orchestrator | Thursday 05 June 2025 19:41:21 +0000 (0:00:01.791) 0:03:01.782 ********* 2025-06-05 19:44:18.695357 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.695366 | orchestrator | 2025-06-05 19:44:18.695376 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-05 19:44:18.695385 | orchestrator | Thursday 05 June 2025 19:41:22 +0000 (0:00:00.991) 0:03:02.774 ********* 2025-06-05 19:44:18.695395 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-05 19:44:18.695405 | orchestrator | 2025-06-05 19:44:18.695414 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-05 19:44:18.695424 | orchestrator | Thursday 05 June 2025 19:41:26 +0000 (0:00:03.082) 0:03:05.857 ********* 2025-06-05 19:44:18.695503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:44:18.695526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-05 19:44:18.695536 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.695631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:44:18.695648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 
'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-05 19:44:18.695665 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.695676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:44:18.695688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-05 19:44:18.695698 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.695708 | orchestrator | 2025-06-05 19:44:18.695718 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-05 19:44:18.695728 | orchestrator | Thursday 05 June 2025 19:41:28 +0000 (0:00:02.209) 0:03:08.067 ********* 2025-06-05 19:44:18.695837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:44:18.695875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-05 19:44:18.695892 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.695908 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:44:18.696029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 
'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-05 19:44:18.696064 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.696107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:44:18.696124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-05 19:44:18.696139 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.696155 | orchestrator | 2025-06-05 19:44:18.696170 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-05 19:44:18.696187 | orchestrator | Thursday 05 June 2025 19:41:29 +0000 (0:00:01.739) 0:03:09.807 ********* 2025-06-05 19:44:18.696203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-05 19:44:18.696320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-05 19:44:18.696347 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.696358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-05 19:44:18.696368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}})  2025-06-05 19:44:18.696379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-05 19:44:18.696389 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.696399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-05 19:44:18.696409 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.696419 | orchestrator | 2025-06-05 19:44:18.696429 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-05 19:44:18.696439 | orchestrator | Thursday 05 June 2025 19:41:32 +0000 (0:00:02.081) 0:03:11.888 ********* 2025-06-05 19:44:18.696449 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.696459 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.696469 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.696478 | orchestrator | 2025-06-05 19:44:18.696488 | 
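The `custom_member_list` entries in the mariadb haproxy-config items above follow a fixed pattern: one active server plus backups, each with identical check parameters. As a hedged sketch (the helper name and signature are illustrative assumptions, not part of kolla-ansible), the list can be reconstructed like this:

```python
# Sketch: rebuild the MariaDB custom_member_list seen in the task output.
# build_member_list is a hypothetical helper for illustration only.

def build_member_list(nodes, primary_index=0, port=3306):
    """Render HAProxy server lines: one primary, the rest marked 'backup'."""
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f" server {name} {addr}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if i != primary_index:
            line += " backup"  # non-primary Galera nodes only take traffic on failover
        lines.append(line)
    return lines

nodes = [
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
]
members = build_member_list(nodes)
```

Routing all writes to a single Galera node with the others as `backup` avoids multi-writer certification conflicts, which matches the active/passive layout the log shows.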
orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-05 19:44:18.696498 | orchestrator | Thursday 05 June 2025 19:41:33 +0000 (0:00:01.919) 0:03:13.808 ********* 2025-06-05 19:44:18.696509 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.696519 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.696537 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.696552 | orchestrator | 2025-06-05 19:44:18.696568 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-05 19:44:18.696592 | orchestrator | Thursday 05 June 2025 19:41:35 +0000 (0:00:01.212) 0:03:15.021 ********* 2025-06-05 19:44:18.696608 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.696626 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.696643 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.696660 | orchestrator | 2025-06-05 19:44:18.696678 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-05 19:44:18.696695 | orchestrator | Thursday 05 June 2025 19:41:35 +0000 (0:00:00.267) 0:03:15.288 ********* 2025-06-05 19:44:18.696713 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.696729 | orchestrator | 2025-06-05 19:44:18.696746 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-05 19:44:18.696762 | orchestrator | Thursday 05 June 2025 19:41:36 +0000 (0:00:00.988) 0:03:16.276 ********* 2025-06-05 19:44:18.696910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-05 19:44:18.696939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-05 19:44:18.696959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2025-06-05 19:44:18.696974 | orchestrator | 2025-06-05 19:44:18.696984 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-05 19:44:18.696993 | orchestrator | Thursday 05 June 2025 19:41:38 +0000 (0:00:01.637) 0:03:17.914 ********* 2025-06-05 19:44:18.697004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-05 19:44:18.697026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-05 19:44:18.697036 | orchestrator | skipping: 
[testbed-node-0] 2025-06-05 19:44:18.697046 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.697160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-05 19:44:18.697177 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.697187 | orchestrator | 2025-06-05 19:44:18.697197 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-05 19:44:18.697206 | orchestrator | Thursday 05 June 2025 19:41:38 +0000 (0:00:00.368) 0:03:18.282 ********* 2025-06-05 19:44:18.697217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-05 19:44:18.697228 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.697238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-05 19:44:18.697248 | orchestrator 
| skipping: [testbed-node-1] 2025-06-05 19:44:18.697258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-05 19:44:18.697268 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.697278 | orchestrator | 2025-06-05 19:44:18.697288 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-05 19:44:18.697297 | orchestrator | Thursday 05 June 2025 19:41:38 +0000 (0:00:00.497) 0:03:18.779 ********* 2025-06-05 19:44:18.697307 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.697317 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.697327 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.697336 | orchestrator | 2025-06-05 19:44:18.697346 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-05 19:44:18.697364 | orchestrator | Thursday 05 June 2025 19:41:39 +0000 (0:00:00.563) 0:03:19.343 ********* 2025-06-05 19:44:18.697374 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.697384 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.697394 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.697404 | orchestrator | 2025-06-05 19:44:18.697414 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-05 19:44:18.697424 | orchestrator | Thursday 05 June 2025 19:41:40 +0000 (0:00:01.164) 0:03:20.508 ********* 2025-06-05 19:44:18.697433 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.697443 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.697453 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.697463 | orchestrator | 2025-06-05 19:44:18.697473 | 
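The memcached item above carries a kolla-style healthcheck dict with string-valued seconds (`'interval': '30'`, `'start_period': '5'`, ...). Docker's engine API expects integer nanoseconds for the duration fields. As a hedged sketch (the converter name and the exact mapping are assumptions for illustration), translating one dict into Docker-API-style healthcheck parameters looks like:

```python
# Sketch: map a kolla-style healthcheck dict (seconds as strings) to
# Docker-API-style values (durations in nanoseconds). Hypothetical helper.

NS_PER_S = 1_000_000_000

def to_docker_healthcheck(hc):
    return {
        "test": hc["test"],                                # e.g. ['CMD-SHELL', ...]
        "interval": int(hc["interval"]) * NS_PER_S,
        "timeout": int(hc["timeout"]) * NS_PER_S,
        "retries": int(hc["retries"]),                     # plain count, not a duration
        "start_period": int(hc["start_period"]) * NS_PER_S,
    }

memcached_hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_listen memcached 11211"],
    "timeout": "30",
}
docker_hc = to_docker_healthcheck(memcached_hc)
```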
orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-05 19:44:18.697483 | orchestrator | Thursday 05 June 2025 19:41:41 +0000 (0:00:00.340) 0:03:20.848 ********* 2025-06-05 19:44:18.697493 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.697503 | orchestrator | 2025-06-05 19:44:18.697512 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-05 19:44:18.697522 | orchestrator | Thursday 05 June 2025 19:41:42 +0000 (0:00:01.486) 0:03:22.334 ********* 2025-06-05 19:44:18.697532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:44:18.697611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': 
'30'}}})  2025-06-05 19:44:18.697657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-05 19:44:18.697668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}}})  2025-06-05 19:44:18.697755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.697771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.697801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-05 19:44:18.697821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:44:18.697894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.697909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 
'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.697949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.698055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.698145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 
'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-05 19:44:18.698174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-06-05 19:44:18.698184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.698314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 
'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.698447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.698458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:44:18.698479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-05 19:44:18.698594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': 
{'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.698716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.698742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.698823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.698831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698839 | orchestrator | 2025-06-05 19:44:18.698847 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-05 19:44:18.698856 | orchestrator | Thursday 05 June 2025 19:41:47 +0000 (0:00:04.572) 0:03:26.907 ********* 2025-06-05 19:44:18.698864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:44:18.698873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:44:18.698935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.698992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-05 19:44:18.699084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-05 19:44:18.699094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:44:18.699221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.699320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.699328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-05 19:44:18.699454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699568 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-05 19:44:18.699578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-06-05 19:44:18.699593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-05 19:44:18.699656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.699669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699677 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.699685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:44:18.699710 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.699719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.699794 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.699803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-06-05 19:44:18.699811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-06-05 19:44:18.699819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.699828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-06-05 19:44:18.699843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:44:18.699879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.699889 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.699897 | orchestrator |
2025-06-05 19:44:18.699906 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2025-06-05 19:44:18.699914 | orchestrator | Thursday 05 June 2025 19:41:48 +0000 (0:00:01.321) 0:03:28.229 *********
2025-06-05 19:44:18.699923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-05 19:44:18.699931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-05 19:44:18.699940 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.699948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-05 19:44:18.699956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2025-06-05 19:44:18.699964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-05 19:44:18.699972 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.699980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2025-06-05 19:44:18.699988 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.700002 | orchestrator |
2025-06-05 19:44:18.700010 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2025-06-05 19:44:18.700018 | orchestrator | Thursday 05 June 2025 19:41:50 +0000 (0:00:01.624) 0:03:29.853 *********
2025-06-05 19:44:18.700026 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.700034 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.700042 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.700049 | orchestrator |
2025-06-05 19:44:18.700058 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2025-06-05 19:44:18.700066 | orchestrator | Thursday 05 June 2025 19:41:51 +0000 (0:00:01.326) 0:03:31.180 *********
2025-06-05 19:44:18.700091 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.700099 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.700108 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.700115 | orchestrator |
2025-06-05 19:44:18.700123 | orchestrator | TASK [include_role : placement] ************************************************
2025-06-05 19:44:18.700131 | orchestrator | Thursday 05 June 2025 19:41:53 +0000 (0:00:01.917) 0:03:33.097 *********
2025-06-05 19:44:18.700139 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.700147 | orchestrator |
2025-06-05 19:44:18.700155 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2025-06-05 19:44:18.700163 | orchestrator | Thursday 05 June 2025 19:41:54 +0000 (0:00:01.067) 0:03:34.164 *********
2025-06-05 19:44:18.700176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700235 | orchestrator |
2025-06-05 19:44:18.700244 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2025-06-05 19:44:18.700252 | orchestrator | Thursday 05 June 2025 19:41:57 +0000 (0:00:03.089) 0:03:37.253 *********
2025-06-05 19:44:18.700260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700268 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.700277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700285 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.700318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700329 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.700337 | orchestrator |
2025-06-05 19:44:18.700345 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2025-06-05 19:44:18.700353 | orchestrator | Thursday 05 June 2025 19:41:57 +0000 (0:00:00.418) 0:03:37.672 *********
2025-06-05 19:44:18.700361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-05 19:44:18.700369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-05 19:44:18.700383 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.700392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-05 19:44:18.700400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-05 19:44:18.700408 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.700416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-05 19:44:18.700425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2025-06-05 19:44:18.700433 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.700441 | orchestrator |
2025-06-05 19:44:18.700449 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2025-06-05 19:44:18.700457 | orchestrator | Thursday 05 June 2025 19:41:58 +0000 (0:00:00.648) 0:03:38.321 *********
2025-06-05 19:44:18.700465 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.700473 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.700481 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.700489 | orchestrator |
2025-06-05 19:44:18.700497 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2025-06-05 19:44:18.700505 | orchestrator | Thursday 05 June 2025 19:41:59 +0000 (0:00:01.502) 0:03:39.823 *********
2025-06-05 19:44:18.700513 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.700521 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.700528 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.700536 | orchestrator |
2025-06-05 19:44:18.700545 | orchestrator | TASK [include_role : nova] *****************************************************
2025-06-05 19:44:18.700553 | orchestrator | Thursday 05 June 2025 19:42:02 +0000 (0:00:02.106) 0:03:41.929 *********
2025-06-05 19:44:18.700561 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.700569 | orchestrator |
2025-06-05 19:44:18.700577 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2025-06-05 19:44:18.700585 | orchestrator | Thursday 05 June 2025 19:42:03 +0000 (0:00:01.252) 0:03:43.181 *********
2025-06-05 19:44:18.700618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700856 | orchestrator |
2025-06-05 19:44:18.700865 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-06-05 19:44:18.700873 | orchestrator | Thursday 05 June 2025 19:42:07 +0000 (0:00:04.307) 0:03:47.489 *********
2025-06-05 19:44:18.700882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700935 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700944 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.700953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.700962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.700979 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.701014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 19:44:18.701030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.701039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:44:18.701048 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.701056 | orchestrator |
2025-06-05 19:44:18.701064 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-06-05 19:44:18.701122 | orchestrator | Thursday 05 June 2025 19:42:08 +0000 (0:00:00.957) 0:03:48.447 *********
2025-06-05 19:44:18.701133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701168 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.701176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701268 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.701276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-06-05 19:44:18.701293 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.701301 | orchestrator |
2025-06-05 19:44:18.701309 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-06-05 19:44:18.701317 | orchestrator | Thursday 05 June 2025 19:42:09 +0000 (0:00:00.848) 0:03:49.295 *********
2025-06-05 19:44:18.701325 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.701333 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.701341 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.701348 | orchestrator |
2025-06-05 19:44:18.701354 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-06-05 19:44:18.701361 | orchestrator | Thursday 05 June 2025 19:42:11 +0000 (0:00:01.677) 0:03:50.973 *********
2025-06-05 19:44:18.701368 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:44:18.701375 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:44:18.701382 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:44:18.701388 | orchestrator |
2025-06-05 19:44:18.701395 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-06-05 19:44:18.701402 | orchestrator | Thursday 05 June 2025 19:42:13 +0000 (0:00:01.327) 0:03:52.867 *********
2025-06-05 19:44:18.701409 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.701416 | orchestrator |
2025-06-05 19:44:18.701422 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-06-05 19:44:18.701429 | orchestrator | Thursday 05 June 2025 19:42:14 +0000 (0:00:01.327) 0:03:54.195 *********
2025-06-05 19:44:18.701436 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-06-05 19:44:18.701443 | orchestrator |
2025-06-05 19:44:18.701450 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-06-05 19:44:18.701457 | orchestrator | Thursday 05 June 2025 19:42:15 +0000 (0:00:00.775) 0:03:54.970 *********
2025-06-05 19:44:18.701465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-05 19:44:18.701477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-05 19:44:18.701485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-05 19:44:18.701492 | orchestrator |
2025-06-05 19:44:18.701499 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-06-05 19:44:18.701506 | orchestrator | Thursday 05 June 2025 19:42:18 +0000 (0:00:03.320) 0:03:58.291 *********
2025-06-05 19:44:18.701539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port':
'6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701548 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.701556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701563 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.701570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701577 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.701584 | orchestrator | 2025-06-05 19:44:18.701591 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-05 19:44:18.701598 | orchestrator | Thursday 05 June 2025 19:42:19 +0000 (0:00:01.020) 0:03:59.312 ********* 2025-06-05 19:44:18.701605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-05 19:44:18.701612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-05 19:44:18.701624 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.701631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-05 19:44:18.701638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-05 19:44:18.701645 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.701652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-05 19:44:18.701659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-05 19:44:18.701666 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.701673 | orchestrator | 2025-06-05 19:44:18.701679 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-05 19:44:18.701686 | orchestrator | Thursday 05 June 2025 19:42:20 +0000 (0:00:01.486) 0:04:00.798 ********* 2025-06-05 19:44:18.701693 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.701700 | orchestrator | changed: [testbed-node-1] 2025-06-05 
19:44:18.701706 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.701713 | orchestrator | 2025-06-05 19:44:18.701720 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-05 19:44:18.701727 | orchestrator | Thursday 05 June 2025 19:42:23 +0000 (0:00:02.118) 0:04:02.917 ********* 2025-06-05 19:44:18.701733 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.701740 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.701747 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.701754 | orchestrator | 2025-06-05 19:44:18.701761 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-05 19:44:18.701768 | orchestrator | Thursday 05 June 2025 19:42:25 +0000 (0:00:02.778) 0:04:05.695 ********* 2025-06-05 19:44:18.701778 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-05 19:44:18.701785 | orchestrator | 2025-06-05 19:44:18.701792 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-05 19:44:18.701818 | orchestrator | Thursday 05 June 2025 19:42:26 +0000 (0:00:00.782) 0:04:06.478 ********* 2025-06-05 19:44:18.701827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701834 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.701841 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701848 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.701860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701867 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.701874 | orchestrator | 2025-06-05 19:44:18.701881 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-05 19:44:18.701887 | orchestrator | Thursday 05 June 2025 19:42:27 +0000 (0:00:01.224) 0:04:07.703 ********* 2025-06-05 19:44:18.701894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701901 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.701908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701915 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.701922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-05 19:44:18.701929 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.701936 | orchestrator | 2025-06-05 19:44:18.701943 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-05 19:44:18.701949 | orchestrator | Thursday 05 June 2025 19:42:29 +0000 (0:00:01.513) 0:04:09.216 ********* 2025-06-05 19:44:18.701956 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.701963 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.701970 | orchestrator | skipping: [testbed-node-2] 2025-06-05 
19:44:18.701976 | orchestrator | 2025-06-05 19:44:18.701989 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-05 19:44:18.702050 | orchestrator | Thursday 05 June 2025 19:42:30 +0000 (0:00:01.148) 0:04:10.364 ********* 2025-06-05 19:44:18.702061 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.702083 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.702092 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.702099 | orchestrator | 2025-06-05 19:44:18.702106 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-05 19:44:18.702112 | orchestrator | Thursday 05 June 2025 19:42:32 +0000 (0:00:02.385) 0:04:12.750 ********* 2025-06-05 19:44:18.702120 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.702127 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.702140 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.702146 | orchestrator | 2025-06-05 19:44:18.702153 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-05 19:44:18.702160 | orchestrator | Thursday 05 June 2025 19:42:36 +0000 (0:00:03.151) 0:04:15.901 ********* 2025-06-05 19:44:18.702167 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-05 19:44:18.702174 | orchestrator | 2025-06-05 19:44:18.702181 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-05 19:44:18.702188 | orchestrator | Thursday 05 June 2025 19:42:37 +0000 (0:00:01.015) 0:04:16.917 ********* 2025-06-05 19:44:18.702195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-05 19:44:18.702203 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.702210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-05 19:44:18.702217 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.702223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-05 19:44:18.702230 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.702237 | orchestrator | 2025-06-05 19:44:18.702244 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-05 19:44:18.702251 | orchestrator | Thursday 05 June 2025 19:42:38 +0000 (0:00:01.031) 0:04:17.948 ********* 2025-06-05 19:44:18.702258 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-05 19:44:18.702265 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.702272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-05 19:44:18.702284 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.702320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-05 19:44:18.702329 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.702336 | orchestrator | 2025-06-05 19:44:18.702343 | orchestrator | TASK [haproxy-config : Configuring 
firewall for nova-cell:nova-serialproxy] **** 2025-06-05 19:44:18.702350 | orchestrator | Thursday 05 June 2025 19:42:39 +0000 (0:00:01.289) 0:04:19.238 ********* 2025-06-05 19:44:18.702357 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.702363 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.702370 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.702377 | orchestrator | 2025-06-05 19:44:18.702383 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-05 19:44:18.702390 | orchestrator | Thursday 05 June 2025 19:42:41 +0000 (0:00:01.743) 0:04:20.982 ********* 2025-06-05 19:44:18.702397 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.702404 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.702410 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.702417 | orchestrator | 2025-06-05 19:44:18.702424 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-05 19:44:18.702431 | orchestrator | Thursday 05 June 2025 19:42:43 +0000 (0:00:02.353) 0:04:23.335 ********* 2025-06-05 19:44:18.702438 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.702444 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.702451 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.702457 | orchestrator | 2025-06-05 19:44:18.702464 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-05 19:44:18.702471 | orchestrator | Thursday 05 June 2025 19:42:46 +0000 (0:00:03.299) 0:04:26.635 ********* 2025-06-05 19:44:18.702477 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.702484 | orchestrator | 2025-06-05 19:44:18.702491 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-05 19:44:18.702498 | orchestrator | Thursday 05 June 2025 19:42:48 +0000 
(0:00:01.277) 0:04:27.913 ********* 2025-06-05 19:44:18.702505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.702512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-05 19:44:18.702528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.702557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-05 19:44:18.702581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.702588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.702638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.702645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-05 19:44:18.702653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702671 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.702679 | orchestrator | 2025-06-05 19:44:18.702686 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-05 19:44:18.702693 | orchestrator | Thursday 05 June 2025 19:42:51 +0000 (0:00:03.471) 0:04:31.385 ********* 2025-06-05 19:44:18.702723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.702732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-05 19:44:18.702740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.702767 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.702774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.702804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-05 19:44:18.702813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.702834 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.702846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.702853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-05 19:44:18.702884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-05 19:44:18.702899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:44:18.702906 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.702913 | orchestrator | 2025-06-05 19:44:18.702920 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-05 19:44:18.702927 | orchestrator | Thursday 05 June 2025 19:42:52 +0000 (0:00:00.610) 
0:04:31.995 ********* 2025-06-05 19:44:18.702934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-05 19:44:18.702941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-05 19:44:18.702952 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.702959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-05 19:44:18.702966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-05 19:44:18.702973 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.702980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-05 19:44:18.702987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-05 19:44:18.702994 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.703001 | orchestrator | 2025-06-05 19:44:18.703007 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-05 19:44:18.703014 | orchestrator | Thursday 05 June 2025 
19:42:52 +0000 (0:00:00.845) 0:04:32.841 ********* 2025-06-05 19:44:18.703021 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.703027 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.703034 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.703041 | orchestrator | 2025-06-05 19:44:18.703048 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-05 19:44:18.703054 | orchestrator | Thursday 05 June 2025 19:42:54 +0000 (0:00:01.534) 0:04:34.375 ********* 2025-06-05 19:44:18.703061 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.703067 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.703090 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.703097 | orchestrator | 2025-06-05 19:44:18.703104 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-05 19:44:18.703110 | orchestrator | Thursday 05 June 2025 19:42:56 +0000 (0:00:01.976) 0:04:36.352 ********* 2025-06-05 19:44:18.703117 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.703124 | orchestrator | 2025-06-05 19:44:18.703131 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-05 19:44:18.703144 | orchestrator | Thursday 05 June 2025 19:42:57 +0000 (0:00:01.225) 0:04:37.577 ********* 2025-06-05 19:44:18.703175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:44:18.703184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:44:18.703197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:44:18.703205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:44:18.703236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:44:18.703246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:44:18.703258 | orchestrator | 2025-06-05 19:44:18.703265 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-05 19:44:18.703272 | orchestrator | Thursday 05 June 2025 19:43:02 +0000 (0:00:04.657) 0:04:42.235 ********* 2025-06-05 19:44:18.703279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:44:18.703287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:44:18.703294 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.703327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:44:18.703336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:44:18.703347 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.703354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:44:18.703362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:44:18.703369 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.703377 | orchestrator | 2025-06-05 
19:44:18.703383 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-05 19:44:18.703390 | orchestrator | Thursday 05 June 2025 19:43:03 +0000 (0:00:00.782) 0:04:43.018 ********* 2025-06-05 19:44:18.703397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-05 19:44:18.703427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-05 19:44:18.703437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-05 19:44:18.703444 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.703451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-05 19:44:18.703463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-05 19:44:18.703470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-05 19:44:18.703477 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.703484 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-05 19:44:18.703490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-05 19:44:18.703498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-05 19:44:18.703504 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.703511 | orchestrator | 2025-06-05 19:44:18.703518 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-05 19:44:18.703525 | orchestrator | Thursday 05 June 2025 19:43:03 +0000 (0:00:00.736) 0:04:43.754 ********* 2025-06-05 19:44:18.703532 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.703538 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.703545 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.703552 | orchestrator | 2025-06-05 19:44:18.703559 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-05 19:44:18.703565 | orchestrator | Thursday 05 June 2025 19:43:04 +0000 (0:00:00.402) 0:04:44.156 ********* 2025-06-05 19:44:18.703572 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.703579 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.703586 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.703593 | orchestrator | 2025-06-05 19:44:18.703599 | orchestrator | TASK [include_role : prometheus] 
***********************************************
2025-06-05 19:44:18.703606 | orchestrator | Thursday 05 June 2025 19:43:05 +0000 (0:00:01.150) 0:04:45.307 *********
2025-06-05 19:44:18.703613 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:44:18.703620 | orchestrator |
2025-06-05 19:44:18.703627 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-06-05 19:44:18.703634 | orchestrator | Thursday 05 June 2025 19:43:06 +0000 (0:00:01.470) 0:04:46.777 *********
2025-06-05 19:44:18.703641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:44:18.703673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:44:18.703688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:44:18.703695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:44:18.703703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.703768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.703777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:44:18.703785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:44:18.703792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.703841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-05 19:44:18.703851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-05 19:44:18.703859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-05 19:44:18.703885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.703901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-05 19:44:18.703909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.703931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-05 19:44:18.703943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-05 19:44:18.703959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.703974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.703981 | orchestrator |
2025-06-05 19:44:18.703988 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-06-05 19:44:18.703995 | orchestrator | Thursday 05 June 2025 19:43:10 +0000 (0:00:03.873) 0:04:50.651 *********
2025-06-05 19:44:18.704002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:44:18.704009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:44:18.704021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704042 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.704050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-05 19:44:18.704057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-05 19:44:18.704065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.704108 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:44:18.704122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:44:18.704130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:44:18.704137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.704164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:44:18.704178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-05 19:44:18.704186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:44:18.704193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-05 19:44:18.704200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.704248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-06-05 19:44:18.704255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.704262 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:44:18.704269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-06-05 19:44:18.704281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-05 19:44:18.704301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-05 19:44:18.704308 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:44:18.704315 | orchestrator |
2025-06-05 19:44:18.704322 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-06-05 19:44:18.704329 | orchestrator | Thursday 05 June 2025 19:43:11 +0000 (0:00:00.939) 0:04:51.590 *********
2025-06-05 19:44:18.704336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-06-05 19:44:18.704343 | orchestrator |
skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-05 19:44:18.704351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-05 19:44:18.704358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-05 19:44:18.704366 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-05 19:44:18.704384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-05 19:44:18.704391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-05 19:44:18.704399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-05 19:44:18.704406 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-05 19:44:18.704420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-05 19:44:18.704427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-05 19:44:18.704434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-05 19:44:18.704441 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.704448 | orchestrator | 2025-06-05 19:44:18.704455 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-05 19:44:18.704462 | orchestrator | Thursday 05 June 2025 19:43:12 +0000 (0:00:00.855) 0:04:52.446 ********* 2025-06-05 19:44:18.704469 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704476 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704482 | orchestrator | skipping: [testbed-node-2] 2025-06-05 
19:44:18.704489 | orchestrator | 2025-06-05 19:44:18.704499 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-05 19:44:18.704506 | orchestrator | Thursday 05 June 2025 19:43:12 +0000 (0:00:00.354) 0:04:52.800 ********* 2025-06-05 19:44:18.704516 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704523 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704530 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.704537 | orchestrator | 2025-06-05 19:44:18.704544 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-05 19:44:18.704551 | orchestrator | Thursday 05 June 2025 19:43:14 +0000 (0:00:01.342) 0:04:54.143 ********* 2025-06-05 19:44:18.704558 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.704565 | orchestrator | 2025-06-05 19:44:18.704571 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-05 19:44:18.704578 | orchestrator | Thursday 05 June 2025 19:43:15 +0000 (0:00:01.511) 0:04:55.655 ********* 2025-06-05 19:44:18.704585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-05 19:44:18.704602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-05 19:44:18.704610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-05 19:44:18.704618 | orchestrator | 2025-06-05 19:44:18.704624 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-05 19:44:18.704631 | orchestrator | Thursday 05 June 2025 19:43:18 +0000 (0:00:02.293) 0:04:57.949 ********* 2025-06-05 19:44:18.704646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-05 19:44:18.704659 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 
'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-05 19:44:18.704673 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-05 19:44:18.704688 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.704694 | orchestrator | 2025-06-05 19:44:18.704701 | orchestrator | TASK [haproxy-config : Configuring firewall for 
rabbitmq] ********************** 2025-06-05 19:44:18.704708 | orchestrator | Thursday 05 June 2025 19:43:18 +0000 (0:00:00.322) 0:04:58.272 ********* 2025-06-05 19:44:18.704715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-05 19:44:18.704722 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-05 19:44:18.704736 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-05 19:44:18.704753 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.704760 | orchestrator | 2025-06-05 19:44:18.704767 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-05 19:44:18.704774 | orchestrator | Thursday 05 June 2025 19:43:19 +0000 (0:00:00.758) 0:04:59.031 ********* 2025-06-05 19:44:18.704781 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704788 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704795 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.704801 | orchestrator | 2025-06-05 19:44:18.704808 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-05 19:44:18.704815 | orchestrator | Thursday 05 June 2025 19:43:19 +0000 (0:00:00.374) 0:04:59.405 ********* 2025-06-05 19:44:18.704822 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.704834 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.704841 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.704847 | 
orchestrator | 2025-06-05 19:44:18.704854 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-05 19:44:18.704867 | orchestrator | Thursday 05 June 2025 19:43:20 +0000 (0:00:01.225) 0:05:00.631 ********* 2025-06-05 19:44:18.704878 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:44:18.704889 | orchestrator | 2025-06-05 19:44:18.704900 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-05 19:44:18.704910 | orchestrator | Thursday 05 June 2025 19:43:22 +0000 (0:00:01.716) 0:05:02.347 ********* 2025-06-05 19:44:18.704922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.704934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.704968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.704981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.704998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.705006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-05 19:44:18.705013 | orchestrator | 2025-06-05 19:44:18.705020 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-05 19:44:18.705027 | orchestrator | Thursday 05 June 2025 19:43:28 +0000 (0:00:05.888) 0:05:08.236 ********* 2025-06-05 19:44:18.705034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.705041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.705053 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.705091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.705099 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.705112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 
'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-05 19:44:18.705124 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.705131 | orchestrator | 2025-06-05 19:44:18.705138 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-05 19:44:18.705145 | orchestrator | Thursday 05 June 2025 19:43:29 +0000 (0:00:00.613) 0:05:08.849 ********* 2025-06-05 19:44:18.705155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705181 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705188 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705222 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705243 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-05 19:44:18.705257 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.705264 | orchestrator | 2025-06-05 19:44:18.705275 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-05 19:44:18.705282 | orchestrator | Thursday 05 June 2025 19:43:30 +0000 (0:00:01.584) 0:05:10.433 ********* 2025-06-05 19:44:18.705289 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.705296 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.705302 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.705309 | orchestrator | 2025-06-05 19:44:18.705316 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-05 19:44:18.705323 | orchestrator | Thursday 05 June 2025 19:43:31 +0000 (0:00:01.348) 0:05:11.782 ********* 2025-06-05 19:44:18.705329 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.705336 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.705343 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.705349 | orchestrator | 2025-06-05 19:44:18.705356 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-05 19:44:18.705363 | orchestrator | Thursday 05 June 2025 19:43:34 +0000 (0:00:02.153) 0:05:13.936 ********* 2025-06-05 19:44:18.705370 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705377 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705383 | orchestrator | skipping: 
[testbed-node-2] 2025-06-05 19:44:18.705390 | orchestrator | 2025-06-05 19:44:18.705397 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-05 19:44:18.705403 | orchestrator | Thursday 05 June 2025 19:43:34 +0000 (0:00:00.323) 0:05:14.259 ********* 2025-06-05 19:44:18.705410 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705417 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705423 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.705430 | orchestrator | 2025-06-05 19:44:18.705437 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-05 19:44:18.705444 | orchestrator | Thursday 05 June 2025 19:43:35 +0000 (0:00:00.615) 0:05:14.875 ********* 2025-06-05 19:44:18.705450 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705457 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705464 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.705471 | orchestrator | 2025-06-05 19:44:18.705481 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-05 19:44:18.705488 | orchestrator | Thursday 05 June 2025 19:43:35 +0000 (0:00:00.308) 0:05:15.183 ********* 2025-06-05 19:44:18.705498 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705505 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705512 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.705518 | orchestrator | 2025-06-05 19:44:18.705525 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-05 19:44:18.705532 | orchestrator | Thursday 05 June 2025 19:43:35 +0000 (0:00:00.308) 0:05:15.492 ********* 2025-06-05 19:44:18.705539 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705546 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705553 | orchestrator | skipping: 
[testbed-node-2] 2025-06-05 19:44:18.705559 | orchestrator | 2025-06-05 19:44:18.705566 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-05 19:44:18.705573 | orchestrator | Thursday 05 June 2025 19:43:35 +0000 (0:00:00.305) 0:05:15.797 ********* 2025-06-05 19:44:18.705580 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.705586 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.705593 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.705600 | orchestrator | 2025-06-05 19:44:18.705607 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-05 19:44:18.705613 | orchestrator | Thursday 05 June 2025 19:43:36 +0000 (0:00:00.830) 0:05:16.628 ********* 2025-06-05 19:44:18.705620 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705627 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705634 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705641 | orchestrator | 2025-06-05 19:44:18.705648 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-05 19:44:18.705659 | orchestrator | Thursday 05 June 2025 19:43:37 +0000 (0:00:00.675) 0:05:17.303 ********* 2025-06-05 19:44:18.705666 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705673 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705679 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705686 | orchestrator | 2025-06-05 19:44:18.705693 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-05 19:44:18.705700 | orchestrator | Thursday 05 June 2025 19:43:37 +0000 (0:00:00.322) 0:05:17.626 ********* 2025-06-05 19:44:18.705706 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705713 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705720 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705726 | 
orchestrator | 2025-06-05 19:44:18.705733 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-05 19:44:18.705740 | orchestrator | Thursday 05 June 2025 19:43:38 +0000 (0:00:01.167) 0:05:18.793 ********* 2025-06-05 19:44:18.705747 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705754 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705760 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705767 | orchestrator | 2025-06-05 19:44:18.705774 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-05 19:44:18.705781 | orchestrator | Thursday 05 June 2025 19:43:39 +0000 (0:00:00.904) 0:05:19.698 ********* 2025-06-05 19:44:18.705787 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705794 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705800 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705807 | orchestrator | 2025-06-05 19:44:18.705814 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-05 19:44:18.705821 | orchestrator | Thursday 05 June 2025 19:43:40 +0000 (0:00:00.888) 0:05:20.586 ********* 2025-06-05 19:44:18.705828 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.705834 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.705841 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.705848 | orchestrator | 2025-06-05 19:44:18.705855 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-05 19:44:18.705861 | orchestrator | Thursday 05 June 2025 19:43:50 +0000 (0:00:09.536) 0:05:30.123 ********* 2025-06-05 19:44:18.705868 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705875 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705882 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705888 | orchestrator | 2025-06-05 19:44:18.705895 | orchestrator | 
RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-05 19:44:18.705902 | orchestrator | Thursday 05 June 2025 19:43:50 +0000 (0:00:00.718) 0:05:30.841 ********* 2025-06-05 19:44:18.705909 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.705916 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.705922 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.705929 | orchestrator | 2025-06-05 19:44:18.705936 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-05 19:44:18.705943 | orchestrator | Thursday 05 June 2025 19:43:58 +0000 (0:00:07.828) 0:05:38.670 ********* 2025-06-05 19:44:18.705949 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.705956 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.705963 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.705969 | orchestrator | 2025-06-05 19:44:18.705976 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-05 19:44:18.705983 | orchestrator | Thursday 05 June 2025 19:44:02 +0000 (0:00:03.717) 0:05:42.387 ********* 2025-06-05 19:44:18.705990 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:44:18.705996 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:44:18.706003 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:44:18.706010 | orchestrator | 2025-06-05 19:44:18.706038 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-05 19:44:18.706047 | orchestrator | Thursday 05 June 2025 19:44:11 +0000 (0:00:09.277) 0:05:51.665 ********* 2025-06-05 19:44:18.706059 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.706065 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.706089 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.706096 | orchestrator | 2025-06-05 19:44:18.706103 | orchestrator | RUNNING HANDLER 
[loadbalancer : Stop master proxysql container] **************** 2025-06-05 19:44:18.706110 | orchestrator | Thursday 05 June 2025 19:44:12 +0000 (0:00:00.313) 0:05:51.978 ********* 2025-06-05 19:44:18.706116 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.706123 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.706129 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.706136 | orchestrator | 2025-06-05 19:44:18.706143 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-05 19:44:18.706157 | orchestrator | Thursday 05 June 2025 19:44:12 +0000 (0:00:00.661) 0:05:52.640 ********* 2025-06-05 19:44:18.706164 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.706170 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.706181 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.706189 | orchestrator | 2025-06-05 19:44:18.706195 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-05 19:44:18.706202 | orchestrator | Thursday 05 June 2025 19:44:13 +0000 (0:00:00.326) 0:05:52.966 ********* 2025-06-05 19:44:18.706209 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.706215 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.706222 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.706229 | orchestrator | 2025-06-05 19:44:18.706236 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-05 19:44:18.706242 | orchestrator | Thursday 05 June 2025 19:44:13 +0000 (0:00:00.323) 0:05:53.289 ********* 2025-06-05 19:44:18.706249 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.706256 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.706262 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.706269 | orchestrator | 2025-06-05 19:44:18.706276 | orchestrator | RUNNING HANDLER 
[loadbalancer : Start master keepalived container] ************* 2025-06-05 19:44:18.706282 | orchestrator | Thursday 05 June 2025 19:44:13 +0000 (0:00:00.325) 0:05:53.615 ********* 2025-06-05 19:44:18.706289 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:44:18.706296 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:44:18.706303 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:44:18.706310 | orchestrator | 2025-06-05 19:44:18.706316 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-05 19:44:18.706323 | orchestrator | Thursday 05 June 2025 19:44:14 +0000 (0:00:00.668) 0:05:54.284 ********* 2025-06-05 19:44:18.706330 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.706337 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.706343 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.706350 | orchestrator | 2025-06-05 19:44:18.706357 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-05 19:44:18.706363 | orchestrator | Thursday 05 June 2025 19:44:15 +0000 (0:00:00.881) 0:05:55.165 ********* 2025-06-05 19:44:18.706370 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:44:18.706377 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:44:18.706383 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:44:18.706390 | orchestrator | 2025-06-05 19:44:18.706396 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:44:18.706403 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-05 19:44:18.706410 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-05 19:44:18.706417 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-05 19:44:18.706424 | orchestrator | 2025-06-05 
19:44:18.706431 | orchestrator | 2025-06-05 19:44:18.706443 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:44:18.706450 | orchestrator | Thursday 05 June 2025 19:44:16 +0000 (0:00:00.843) 0:05:56.009 ********* 2025-06-05 19:44:18.706457 | orchestrator | =============================================================================== 2025-06-05 19:44:18.706463 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.54s 2025-06-05 19:44:18.706470 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.28s 2025-06-05 19:44:18.706477 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.83s 2025-06-05 19:44:18.706483 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.89s 2025-06-05 19:44:18.706490 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 5.73s 2025-06-05 19:44:18.706497 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.69s 2025-06-05 19:44:18.706504 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 4.66s 2025-06-05 19:44:18.706510 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.64s 2025-06-05 19:44:18.706517 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.57s 2025-06-05 19:44:18.706524 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.31s 2025-06-05 19:44:18.706530 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.12s 2025-06-05 19:44:18.706537 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.98s 2025-06-05 19:44:18.706544 | orchestrator | loadbalancer : Copying checks for services which are enabled 
------------ 3.89s 2025-06-05 19:44:18.706550 | orchestrator | haproxy-config : Copying over grafana haproxy config -------------------- 3.89s 2025-06-05 19:44:18.706557 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.87s 2025-06-05 19:44:18.706563 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.76s 2025-06-05 19:44:18.706570 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.75s 2025-06-05 19:44:18.706577 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.72s 2025-06-05 19:44:18.706583 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.63s 2025-06-05 19:44:18.706590 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.58s 2025-06-05 19:44:18.706597 | orchestrator | 2025-06-05 19:44:18 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:44:18.706607 | orchestrator | 2025-06-05 19:44:18 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:44:18.706617 | orchestrator | 2025-06-05 19:44:18 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:44:21.732690 | orchestrator | 2025-06-05 19:44:21 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:44:21.733477 | orchestrator | 2025-06-05 19:44:21 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:44:21.734463 | orchestrator | 2025-06-05 19:44:21 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:44:21.734512 | orchestrator | 2025-06-05 19:44:21 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:44:24.780713 | orchestrator | 2025-06-05 19:44:24 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:44:24.782072 | orchestrator | 2025-06-05 19:44:24 | INFO  
Wait 1 second(s) until the next check 2025-06-05 19:46:11.486322 | orchestrator | 2025-06-05 19:46:11 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:11.488066 | orchestrator | 2025-06-05 19:46:11 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:11.491294 | orchestrator | 2025-06-05 19:46:11 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:11.491900 | orchestrator | 2025-06-05 19:46:11 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:14.537640 | orchestrator | 2025-06-05 19:46:14 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:14.538987 | orchestrator | 2025-06-05 19:46:14 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:14.539506 | orchestrator | 2025-06-05 19:46:14 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:14.539546 | orchestrator | 2025-06-05 19:46:14 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:17.601382 | orchestrator | 2025-06-05 19:46:17 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:17.603211 | orchestrator | 2025-06-05 19:46:17 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:17.605162 | orchestrator | 2025-06-05 19:46:17 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:17.605378 | orchestrator | 2025-06-05 19:46:17 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:20.650643 | orchestrator | 2025-06-05 19:46:20 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:20.651394 | orchestrator | 2025-06-05 19:46:20 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:20.653278 | orchestrator | 2025-06-05 19:46:20 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state 
STARTED 2025-06-05 19:46:20.653311 | orchestrator | 2025-06-05 19:46:20 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:23.699287 | orchestrator | 2025-06-05 19:46:23 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:23.699825 | orchestrator | 2025-06-05 19:46:23 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:23.701435 | orchestrator | 2025-06-05 19:46:23 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:23.701530 | orchestrator | 2025-06-05 19:46:23 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:26.745007 | orchestrator | 2025-06-05 19:46:26 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:26.746703 | orchestrator | 2025-06-05 19:46:26 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:26.748661 | orchestrator | 2025-06-05 19:46:26 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:26.748688 | orchestrator | 2025-06-05 19:46:26 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:29.790337 | orchestrator | 2025-06-05 19:46:29 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:29.792046 | orchestrator | 2025-06-05 19:46:29 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:29.793796 | orchestrator | 2025-06-05 19:46:29 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:29.793861 | orchestrator | 2025-06-05 19:46:29 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:32.846551 | orchestrator | 2025-06-05 19:46:32 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:32.848163 | orchestrator | 2025-06-05 19:46:32 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:32.850502 | orchestrator | 
2025-06-05 19:46:32 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:32.850531 | orchestrator | 2025-06-05 19:46:32 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:35.894619 | orchestrator | 2025-06-05 19:46:35 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:35.896138 | orchestrator | 2025-06-05 19:46:35 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:35.897883 | orchestrator | 2025-06-05 19:46:35 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:35.897964 | orchestrator | 2025-06-05 19:46:35 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:38.944421 | orchestrator | 2025-06-05 19:46:38 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:38.946726 | orchestrator | 2025-06-05 19:46:38 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:38.949184 | orchestrator | 2025-06-05 19:46:38 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:38.949284 | orchestrator | 2025-06-05 19:46:38 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:41.994377 | orchestrator | 2025-06-05 19:46:41 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:41.997178 | orchestrator | 2025-06-05 19:46:41 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:42.000636 | orchestrator | 2025-06-05 19:46:42 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:42.000671 | orchestrator | 2025-06-05 19:46:42 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:45.040231 | orchestrator | 2025-06-05 19:46:45 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:45.041475 | orchestrator | 2025-06-05 19:46:45 | INFO  | Task 
b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:45.044086 | orchestrator | 2025-06-05 19:46:45 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:45.044259 | orchestrator | 2025-06-05 19:46:45 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:48.086659 | orchestrator | 2025-06-05 19:46:48 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:48.087507 | orchestrator | 2025-06-05 19:46:48 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:48.089173 | orchestrator | 2025-06-05 19:46:48 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:48.089284 | orchestrator | 2025-06-05 19:46:48 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:51.128867 | orchestrator | 2025-06-05 19:46:51 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:51.130247 | orchestrator | 2025-06-05 19:46:51 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:51.132044 | orchestrator | 2025-06-05 19:46:51 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:51.132320 | orchestrator | 2025-06-05 19:46:51 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:54.179693 | orchestrator | 2025-06-05 19:46:54 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:46:54.180670 | orchestrator | 2025-06-05 19:46:54 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:46:54.182224 | orchestrator | 2025-06-05 19:46:54 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state STARTED 2025-06-05 19:46:54.182362 | orchestrator | 2025-06-05 19:46:54 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:46:57.240637 | orchestrator | 2025-06-05 19:46:57 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state 
STARTED
2025-06-05 19:46:57.242534 | orchestrator | 2025-06-05 19:46:57 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED
2025-06-05 19:46:57.244476 | orchestrator | 2025-06-05 19:46:57 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED
2025-06-05 19:46:57.250502 | orchestrator | 2025-06-05 19:46:57 | INFO  | Task 6f75b2cb-16b6-4847-a864-3cf9ec17204f is in state SUCCESS
2025-06-05 19:46:57.252347 | orchestrator |
2025-06-05 19:46:57.252395 | orchestrator |
2025-06-05 19:46:57.252485 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-05 19:46:57.252526 | orchestrator |
2025-06-05 19:46:57.252538 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-05 19:46:57.252549 | orchestrator | Thursday 05 June 2025 19:35:50 +0000 (0:00:00.857) 0:00:00.857 *********
2025-06-05 19:46:57.252562 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.252574 | orchestrator |
2025-06-05 19:46:57.252585 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-05 19:46:57.252596 | orchestrator | Thursday 05 June 2025 19:35:51 +0000 (0:00:01.204) 0:00:02.062 *********
2025-06-05 19:46:57.252607 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.252688 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.252700 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.252724 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.252735 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.252746 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.252757 | orchestrator |
2025-06-05 19:46:57.252768 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-05 19:46:57.252779 | orchestrator | Thursday 05 June 2025 19:35:53 +0000 (0:00:01.873) 0:00:03.935 *********
2025-06-05 19:46:57.252790 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.252801 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.252812 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.252850 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.252862 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.252873 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.252884 | orchestrator |
2025-06-05 19:46:57.252895 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-05 19:46:57.252974 | orchestrator | Thursday 05 June 2025 19:35:54 +0000 (0:00:00.771) 0:00:04.707 *********
2025-06-05 19:46:57.252987 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.253044 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.253193 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.253208 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.253220 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.253232 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.253244 | orchestrator |
2025-06-05 19:46:57.253256 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-05 19:46:57.253269 | orchestrator | Thursday 05 June 2025 19:35:55 +0000 (0:00:00.879) 0:00:05.587 *********
2025-06-05 19:46:57.253281 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.253293 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.253305 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.253317 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.253328 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.253339 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.253349 | orchestrator |
2025-06-05 19:46:57.253360 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-05 19:46:57.253371 | orchestrator | Thursday 05 June 2025 19:35:55 +0000 (0:00:00.759) 0:00:06.346 *********
2025-06-05 19:46:57.253382 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.253393 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.253403 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.253414 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.253425 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.253435 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.253446 | orchestrator |
2025-06-05 19:46:57.253457 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-05 19:46:57.253468 | orchestrator | Thursday 05 June 2025 19:35:56 +0000 (0:00:00.641) 0:00:06.988 *********
2025-06-05 19:46:57.253479 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.253489 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.253500 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.253511 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.253531 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.253542 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.253553 | orchestrator |
2025-06-05 19:46:57.253564 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-05 19:46:57.253575 | orchestrator | Thursday 05 June 2025 19:35:57 +0000 (0:00:00.927) 0:00:07.915 *********
2025-06-05 19:46:57.253586 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.253598 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.253609 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.253620 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.253630 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.253641 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.253652 | orchestrator |
2025-06-05 19:46:57.253663 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-05 19:46:57.253674 | orchestrator | Thursday 05 June 2025 19:35:58 +0000 (0:00:00.827) 0:00:08.742 *********
2025-06-05 19:46:57.253685 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.253696 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.253707 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.253717 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.253728 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.253739 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.253749 | orchestrator |
2025-06-05 19:46:57.253761 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-05 19:46:57.253772 | orchestrator | Thursday 05 June 2025 19:35:59 +0000 (0:00:01.109) 0:00:09.852 *********
2025-06-05 19:46:57.253783 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-05 19:46:57.253794 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:46:57.253804 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:46:57.253815 | orchestrator |
2025-06-05 19:46:57.253826 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-05 19:46:57.253837 | orchestrator | Thursday 05 June 2025 19:35:59 +0000 (0:00:00.620) 0:00:10.472 *********
2025-06-05 19:46:57.253848 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.253858 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.253869 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.253880 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.253891 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.253901 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.253912 | orchestrator |
2025-06-05 19:46:57.253937 | orchestrator | TASK
[ceph-facts : Find a running mon container] *******************************
2025-06-05 19:46:57.253949 | orchestrator | Thursday 05 June 2025 19:36:01 +0000 (0:00:01.121) 0:00:11.593 *********
2025-06-05 19:46:57.253960 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-05 19:46:57.253971 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:46:57.253982 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:46:57.254063 | orchestrator |
2025-06-05 19:46:57.254079 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-05 19:46:57.254163 | orchestrator | Thursday 05 June 2025 19:36:04 +0000 (0:00:03.106) 0:00:14.700 *********
2025-06-05 19:46:57.254225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-05 19:46:57.254245 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-05 19:46:57.254257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-05 19:46:57.254268 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.254279 | orchestrator |
2025-06-05 19:46:57.254290 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-05 19:46:57.254301 | orchestrator | Thursday 05 June 2025 19:36:04 +0000 (0:00:00.781) 0:00:15.481 *********
2025-06-05 19:46:57.254323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254337 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254359 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.254370 | orchestrator |
2025-06-05 19:46:57.254381 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-05 19:46:57.254392 | orchestrator | Thursday 05 June 2025 19:36:06 +0000 (0:00:01.192) 0:00:16.674 *********
2025-06-05 19:46:57.254406 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254421 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254444 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.254455 | orchestrator |
2025-06-05 19:46:57.254466 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-05 19:46:57.254477 | orchestrator | Thursday 05 June 2025 19:36:06 +0000 (0:00:00.424) 0:00:17.099 *********
2025-06-05 19:46:57.254500 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-05 19:36:01.650470', 'end': '2025-06-05 19:36:01.917330', 'delta': '0:00:00.266860', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254521 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-05 19:36:02.972585', 'end': '2025-06-05 19:36:03.243082', 'delta': '0:00:00.270497', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254540 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-05 19:36:03.746511', 'end': '2025-06-05 19:36:04.015387', 'delta': '0:00:00.268876', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.254551 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.254563 | orchestrator |
2025-06-05 19:46:57.254574 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-05 19:46:57.254584 | orchestrator | Thursday 05 June 2025 19:36:06 +0000 (0:00:00.218) 0:00:17.318 *********
2025-06-05 19:46:57.254596 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.254607 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.254618 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.254628 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.254639 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.254650 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.254661 | orchestrator |
2025-06-05 19:46:57.254672 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-05 19:46:57.254683 | orchestrator | Thursday 05 June 2025 19:36:07 +0000 (0:00:01.000) 0:00:18.318 *********
2025-06-05 19:46:57.254694 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-05 19:46:57.254705 | orchestrator |
2025-06-05 19:46:57.254716 | orchestrator | TASK [ceph-facts : Set_fact
current_fsid rc 1] *********************************
2025-06-05 19:46:57.254727 | orchestrator | Thursday 05 June 2025 19:36:08 +0000 (0:00:00.747) 0:00:19.066 *********
2025-06-05 19:46:57.254738 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.254749 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.254760 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.254771 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.254781 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.254792 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.254803 | orchestrator |
2025-06-05 19:46:57.255090 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-05 19:46:57.255103 | orchestrator | Thursday 05 June 2025 19:36:09 +0000 (0:00:01.378) 0:00:20.444 *********
2025-06-05 19:46:57.255114 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255125 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.255136 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.255147 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.255158 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.255169 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.255180 | orchestrator |
2025-06-05 19:46:57.255191 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-05 19:46:57.255201 | orchestrator | Thursday 05 June 2025 19:36:11 +0000 (0:00:01.366) 0:00:21.810 *********
2025-06-05 19:46:57.255212 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255223 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.255234 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.255244 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.255255 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.255266 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.255277 | orchestrator |
2025-06-05 19:46:57.255288 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-05 19:46:57.255308 | orchestrator | Thursday 05 June 2025 19:36:12 +0000 (0:00:01.135) 0:00:22.946 *********
2025-06-05 19:46:57.255319 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255330 | orchestrator |
2025-06-05 19:46:57.255341 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-05 19:46:57.255352 | orchestrator | Thursday 05 June 2025 19:36:12 +0000 (0:00:00.134) 0:00:23.081 *********
2025-06-05 19:46:57.255363 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255374 | orchestrator |
2025-06-05 19:46:57.255384 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-05 19:46:57.255395 | orchestrator | Thursday 05 June 2025 19:36:12 +0000 (0:00:00.222) 0:00:23.304 *********
2025-06-05 19:46:57.255406 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255417 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.255427 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.255492 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.255505 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.255604 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.255614 | orchestrator |
2025-06-05 19:46:57.255632 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-05 19:46:57.255642 | orchestrator | Thursday 05 June 2025 19:36:13 +0000 (0:00:00.883) 0:00:24.187 *********
2025-06-05 19:46:57.255652 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255662 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.255672 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.255682 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.255691 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.255701 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.255711 | orchestrator |
2025-06-05 19:46:57.255720 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-05 19:46:57.255730 | orchestrator | Thursday 05 June 2025 19:36:14 +0000 (0:00:00.917) 0:00:25.105 *********
2025-06-05 19:46:57.255764 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255774 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.255784 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.255794 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.255809 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.255819 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.255829 | orchestrator |
2025-06-05 19:46:57.255839 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-05 19:46:57.255848 | orchestrator | Thursday 05 June 2025 19:36:15 +0000 (0:00:00.929) 0:00:26.034 *********
2025-06-05 19:46:57.255858 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.255867 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.255877 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.255886 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.255896 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.255935 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.255945 | orchestrator |
2025-06-05 19:46:57.255980 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-05 19:46:57.256006 | orchestrator | Thursday 05 June 2025 19:36:16 +0000 (0:00:01.165) 0:00:27.200 *********
2025-06-05 19:46:57.256017 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.256027 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.256066 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.256077 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.256087 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.256097 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.256106 | orchestrator |
2025-06-05 19:46:57.256116 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-05 19:46:57.256126 | orchestrator | Thursday 05 June 2025 19:36:17 +0000 (0:00:00.532) 0:00:27.733 *********
2025-06-05 19:46:57.256135 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.256152 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.256162 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.256172 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.256181 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.256191 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.256200 | orchestrator |
2025-06-05 19:46:57.256243 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-05 19:46:57.256254 | orchestrator | Thursday 05 June 2025 19:36:18 +0000 (0:00:01.005) 0:00:28.738 *********
2025-06-05 19:46:57.256264 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.256273 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.256283 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.256293 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.256303 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.256312 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.256322 | orchestrator |
2025-06-05 19:46:57.256332 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-05 19:46:57.256342 | orchestrator | Thursday 05 June 2025 19:36:18 +0000
(0:00:00.654) 0:00:29.393 ********* 2025-06-05 19:46:57.256353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a', 'dm-uuid-LVM-FJ6uKlHNSGbth2KF4rcsOp5SwCqZZXFqbp7EUNPk6nYvKjRXReNrgdrbcAUP75wR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256366 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf', 'dm-uuid-LVM-usj4PKAAo7bOTqcq2VpJyf3PYfNjK0vPdWUiYN0Pt9egly7bK34oCoFUCm1EK2VC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': 
None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256469 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98', 'dm-uuid-LVM-01YdIcfs11p9JZGMgWn4H0UfDM053J43W4fFVzwIIS33LHdgeBRcb5dnpEhaGTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256591 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91', 'dm-uuid-LVM-rvVAKQOoNa1LDt85v5BCuQy3xGeCyndFeELQHQeC95k9Fy3dyt3JCS3tvdhusiwj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s5weHx-Yzfl-woeC-3VoH-1mHe-YyQa-EkSXvM', 'scsi-0QEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312', 'scsi-SQEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256643 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mh3H5D-yLz2-Hszp-mjP8-JYsP-T18X-vrUN4o', 'scsi-0QEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441', 'scsi-SQEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766', 'scsi-SQEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-05 19:46:57.256799 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256809 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.256819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256829 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58', 'dm-uuid-LVM-AiQgXeMLqwZPZJwmmYyGvDG90hw0rDujSvclsUp4cC2cb5gI9Wp0oYIdoOTdvnOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 
MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256893 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vdHksL-63Rz-dpTc-XuGp-uG8Q-nqpa-Y1fNNT', 'scsi-0QEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25', 'scsi-SQEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5', 'dm-uuid-LVM-gW9yOshmp7eJcBlMCjxQnlZcdEM46DH6RosfnoVrZ7wDAoSEBYV30R3YJoU72UMm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dL8QL9-QnSn-0KK1-A11Q-6XKs-ilZ5-3a2xD2', 'scsi-0QEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f', 'scsi-SQEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.256978 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.256989 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 
'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2', 'scsi-SQEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257127 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257180 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257332 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257365 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TIo4v0-W8eY-u89J-pG5y-1hqb-Dcid-h2BAEN', 'scsi-0QEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8', 'scsi-SQEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6njWn-NxbI-PHJ9-m1Zl-AMQA-1JTZ-JrFVPO', 'scsi-0QEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e', 'scsi-SQEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257398 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.257407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b', 'scsi-SQEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-05 19:46:57.257465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257580 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257619 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.257627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257636 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.257644 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.257652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:46:57.257733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part1', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part14', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part15', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part16', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257752 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:46:57.257760 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.257769 | orchestrator | 2025-06-05 19:46:57.257777 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-05 19:46:57.257785 | orchestrator | Thursday 05 June 2025 19:36:20 +0000 (0:00:01.641) 0:00:31.034 ********* 2025-06-05 19:46:57.257798 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a', 'dm-uuid-LVM-FJ6uKlHNSGbth2KF4rcsOp5SwCqZZXFqbp7EUNPk6nYvKjRXReNrgdrbcAUP75wR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf', 'dm-uuid-LVM-usj4PKAAo7bOTqcq2VpJyf3PYfNjK0vPdWUiYN0Pt9egly7bK34oCoFUCm1EK2VC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257816 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257824 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98', 'dm-uuid-LVM-01YdIcfs11p9JZGMgWn4H0UfDM053J43W4fFVzwIIS33LHdgeBRcb5dnpEhaGTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257842 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91', 'dm-uuid-LVM-rvVAKQOoNa1LDt85v5BCuQy3xGeCyndFeELQHQeC95k9Fy3dyt3JCS3tvdhusiwj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257851 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257864 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257872 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257881 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257889 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257902 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257929 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257946 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-05 19:46:57.257967 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.257975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.258009 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.258059 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.258075 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:46:57.258912 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s5weHx-Yzfl-woeC-3VoH-1mHe-YyQa-EkSXvM', 'scsi-0QEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312', 'scsi-SQEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.258943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.258962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58', 'dm-uuid-LVM-AiQgXeMLqwZPZJwmmYyGvDG90hw0rDujSvclsUp4cC2cb5gI9Wp0oYIdoOTdvnOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.258977 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vdHksL-63Rz-dpTc-XuGp-uG8Q-nqpa-Y1fNNT', 'scsi-0QEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25', 'scsi-SQEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259010 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mh3H5D-yLz2-Hszp-mjP8-JYsP-T18X-vrUN4o', 'scsi-0QEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441', 'scsi-SQEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259021 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766', 'scsi-SQEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259030 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dL8QL9-QnSn-0KK1-A11Q-6XKs-ilZ5-3a2xD2', 'scsi-0QEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f', 'scsi-SQEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5', 'dm-uuid-LVM-gW9yOshmp7eJcBlMCjxQnlZcdEM46DH6RosfnoVrZ7wDAoSEBYV30R3YJoU72UMm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259062 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259076 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2', 'scsi-SQEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259084 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.259093 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item':
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259117 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259125 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259138 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259147 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259164 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259178 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259229 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key':
'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259243 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259256 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259264 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259293 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part1', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part14', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part15', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part16', 'scsi-SQEMU_QEMU_HARDDISK_3640e00a-7211-4496-a331-9499d5efe8aa-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259307 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-18-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259344 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259357 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TIo4v0-W8eY-u89J-pG5y-1hqb-Dcid-h2BAEN', 'scsi-0QEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8', 'scsi-SQEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259371 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6njWn-NxbI-PHJ9-m1Zl-AMQA-1JTZ-JrFVPO', 'scsi-0QEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e', 'scsi-SQEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259380 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b', 'scsi-SQEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259388 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.259401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259409 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259422 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259435 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259444 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259460 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259473 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259485 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259495 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_38d524cb-058b-4154-b8dc-2ef4d020f5e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-05 19:46:57.259508 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.259518 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-19-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259528 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.259537 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.259551 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259568 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259583 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259592 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259602 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259611 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259625 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259638 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259654 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part1', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part14', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part15', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part16', 'scsi-SQEMU_QEMU_HARDDISK_077eafe1-9404-44ab-9d2f-e62cd06db711-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259665 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-05 19:46:57.259674 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.259683 | orchestrator |
2025-06-05 19:46:57.259693 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-06-05 19:46:57.259702 | orchestrator | Thursday 05 June 2025 19:36:22 +0000 (0:00:02.306) 0:00:33.341 *********
2025-06-05 19:46:57.259715 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.259725 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.259734 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.259743 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.259751 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.259760 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.259768 | orchestrator |
2025-06-05 19:46:57.259778 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-06-05 19:46:57.259793 | orchestrator | Thursday 05 June 2025 19:36:23 +0000 (0:00:01.116) 0:00:34.457 *********
2025-06-05 19:46:57.259802 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.259811 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.259820 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.259829 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.259838 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.259846 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.259855 | orchestrator |
2025-06-05 19:46:57.259865 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-05 19:46:57.259876 | orchestrator | Thursday 05 June 2025 19:36:24 +0000 (0:00:01.068) 0:00:35.525 *********
2025-06-05 19:46:57.259885 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.259893 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.259900 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.259908 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.259916 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.259924 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.259932 | orchestrator |
2025-06-05 19:46:57.259940 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-05 19:46:57.259948 | orchestrator | Thursday 05 June 2025 19:36:25 +0000 (0:00:00.749) 0:00:36.275 *********
2025-06-05 19:46:57.259956 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.259964 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.259971 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.259979 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.259987 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.260048 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.260057 | orchestrator |
2025-06-05 19:46:57.260065 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-06-05 19:46:57.260073 | orchestrator | Thursday 05 June 2025 19:36:26 +0000 (0:00:00.493) 0:00:36.769 *********
2025-06-05 19:46:57.260081 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260089 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.260097 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.260104 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.260112 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.260120 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.260128 | orchestrator |
2025-06-05 19:46:57.260136 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-06-05 19:46:57.260144 | orchestrator | Thursday 05 June 2025 19:36:27 +0000 (0:00:00.740) 0:00:37.691 *********
2025-06-05 19:46:57.260151 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260159 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.260167 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.260175 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.260183 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.260191 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.260198 | orchestrator |
2025-06-05 19:46:57.260206 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-06-05 19:46:57.260214 | orchestrator | Thursday 05 June 2025 19:36:27 +0000 (0:00:00.740) 0:00:38.432 *********
2025-06-05 19:46:57.260222 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-06-05 19:46:57.260229 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-06-05 19:46:57.260235 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-06-05 19:46:57.260242 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-06-05 19:46:57.260249 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-06-05 19:46:57.260255 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-05 19:46:57.260262 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-06-05 19:46:57.260268 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-06-05 19:46:57.260275 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-05 19:46:57.260287 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-06-05 19:46:57.260294 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-05 19:46:57.260300 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-06-05 19:46:57.260307 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-06-05 19:46:57.260313 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-06-05 19:46:57.260320 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-06-05 19:46:57.260326 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-06-05 19:46:57.260333 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-06-05 19:46:57.260340 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-06-05 19:46:57.260346 | orchestrator |
2025-06-05 19:46:57.260353 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-06-05 19:46:57.260360 | orchestrator | Thursday 05 June 2025 19:36:32 +0000 (0:00:04.670) 0:00:43.102 *********
2025-06-05 19:46:57.260366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-05 19:46:57.260373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-05 19:46:57.260380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-05 19:46:57.260387 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260393 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-05 19:46:57.260400 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-05 19:46:57.260407 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-05 19:46:57.260414 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.260420 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-05 19:46:57.260427 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-05 19:46:57.260438 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-05 19:46:57.260445 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.260452 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-05 19:46:57.260459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-05 19:46:57.260465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-05 19:46:57.260472 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.260479 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-05 19:46:57.260486 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-05 19:46:57.260492 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-05 19:46:57.260499 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-05 19:46:57.260505 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.260512 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-05 19:46:57.260523 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-05 19:46:57.260529 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.260536 | orchestrator |
2025-06-05 19:46:57.260543 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-06-05 19:46:57.260550 | orchestrator | Thursday 05 June 2025 19:36:33 +0000 (0:00:00.661) 0:00:43.763 *********
2025-06-05 19:46:57.260556 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.260563 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.260570 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.260577 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.260583 | orchestrator |
2025-06-05 19:46:57.260590 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-05 19:46:57.260597 | orchestrator | Thursday 05 June 2025 19:36:34 +0000 (0:00:01.273) 0:00:45.037 *********
2025-06-05 19:46:57.260604 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260616 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.260622 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.260629 | orchestrator |
2025-06-05 19:46:57.260636 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-05 19:46:57.260642 | orchestrator | Thursday 05 June 2025 19:36:34 +0000 (0:00:00.307) 0:00:45.344 *********
2025-06-05 19:46:57.260649 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260656 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.260662 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.260669 | orchestrator |
2025-06-05 19:46:57.260675 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-05 19:46:57.260682 | orchestrator | Thursday 05 June 2025 19:36:35 +0000 (0:00:00.804) 0:00:46.148 *********
2025-06-05 19:46:57.260689 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260695 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.260702 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.260709 | orchestrator |
2025-06-05 19:46:57.260715 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-05 19:46:57.260722 | orchestrator | Thursday 05 June 2025 19:36:36 +0000 (0:00:00.430) 0:00:46.578 *********
2025-06-05 19:46:57.260728 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.260735 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.260742 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.260749 | orchestrator |
2025-06-05 19:46:57.260755 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-05 19:46:57.260762 | orchestrator | Thursday 05 June 2025 19:36:36 +0000 (0:00:00.547) 0:00:47.126 *********
2025-06-05 19:46:57.260768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.260775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.260782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.260789 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260795 | orchestrator |
2025-06-05 19:46:57.260802 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-05 19:46:57.260808 | orchestrator | Thursday 05 June 2025 19:36:36 +0000 (0:00:00.345) 0:00:47.471 *********
2025-06-05 19:46:57.260815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.260822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.260828 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.260835 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260842 | orchestrator |
2025-06-05 19:46:57.260848 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-05 19:46:57.260855 | orchestrator | Thursday 05 June 2025 19:36:37 +0000 (0:00:00.437) 0:00:47.909 *********
2025-06-05 19:46:57.260862 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.260868 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.260875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.260882 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.260888 | orchestrator |
2025-06-05 19:46:57.260895 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-05 19:46:57.260902 | orchestrator | Thursday 05 June 2025 19:36:38 +0000 (0:00:00.771) 0:00:48.681 *********
2025-06-05 19:46:57.260908 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.260915 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.260922 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.260928 | orchestrator |
2025-06-05 19:46:57.260935 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-05 19:46:57.260942 | orchestrator | Thursday 05 June 2025 19:36:39 +0000 (0:00:00.919) 0:00:49.601 *********
2025-06-05 19:46:57.260948 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-05 19:46:57.260955 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-05 19:46:57.260966 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-05 19:46:57.260973 | orchestrator |
2025-06-05 19:46:57.260983 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-06-05 19:46:57.261006 | orchestrator | Thursday 05 June 2025 19:36:40 +0000 (0:00:01.263) 0:00:50.865 *********
2025-06-05 19:46:57.261014 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-05 19:46:57.261021 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:46:57.261028 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:46:57.261034 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.261041 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-05 19:46:57.261048 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-05 19:46:57.261057 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-05 19:46:57.261064 | orchestrator |
2025-06-05 19:46:57.261071 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-06-05 19:46:57.261078 | orchestrator | Thursday 05 June 2025 19:36:41 +0000 (0:00:00.771) 0:00:51.637 *********
2025-06-05 19:46:57.261084 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-05 19:46:57.261091 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:46:57.261097 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:46:57.261104 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.261110 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-06-05 19:46:57.261117 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-06-05 19:46:57.261124 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-06-05 19:46:57.261130 | orchestrator |
2025-06-05 19:46:57.261137 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-05 19:46:57.261144 | orchestrator | Thursday 05 June 2025 19:36:43 +0000 (0:00:02.653) 0:00:54.291 *********
2025-06-05 19:46:57.261150 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.261157 | orchestrator |
2025-06-05 19:46:57.261164 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-05 19:46:57.261171 | orchestrator | Thursday 05 June 2025 19:36:44 +0000 (0:00:01.090) 0:00:55.381 *********
2025-06-05 19:46:57.261177 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.261184 | orchestrator |
2025-06-05 19:46:57.261190 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-05 19:46:57.261197 | orchestrator | Thursday 05 June 2025 19:36:46 +0000 (0:00:01.263) 0:00:56.644 *********
2025-06-05 19:46:57.261204 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.261210 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.261217 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.261223 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.261230 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.261238 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.261250 | orchestrator |
2025-06-05 19:46:57.261257 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-05 19:46:57.261264 | orchestrator | Thursday 05 June 2025 19:36:47 +0000 (0:00:01.206) 0:00:57.850 *********
2025-06-05 19:46:57.261270 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.261277 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.261288 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.261295 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.261302 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.261309 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.261316 | orchestrator |
2025-06-05 19:46:57.261322 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-05 19:46:57.261329 | orchestrator | Thursday 05 June 2025 19:36:48 +0000 (0:00:00.996) 0:00:58.847 *********
2025-06-05 19:46:57.261336 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.261343 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.261349 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.261356 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.261363 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.261370 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.261376 | orchestrator |
2025-06-05 19:46:57.261383 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-05 19:46:57.261390 | orchestrator | Thursday 05 June 2025 19:36:49 +0000 (0:00:01.074) 0:00:59.921 *********
2025-06-05 19:46:57.261397 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.261403 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.261410 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.261417 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.261424 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.261430 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.261437 | orchestrator |
2025-06-05 19:46:57.261444 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-05 19:46:57.261451 | orchestrator | Thursday 05 June 2025 19:36:50 +0000 (0:00:00.761) 0:01:00.683 *********
2025-06-05 19:46:57.261458 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.261464 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.261471 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.261478 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.261485 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.261491 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.261498 | orchestrator |
2025-06-05 19:46:57.261505 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-05 19:46:57.261515 | orchestrator | Thursday 05 June 2025 19:36:51 +0000 (0:00:01.231) 0:01:01.914 *********
2025-06-05 19:46:57.261522 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.261529 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.261536 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.261542 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.261549 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.261556 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.261562 | orchestrator |
2025-06-05 19:46:57.261569 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-05 19:46:57.261576 | orchestrator | Thursday 05 June 2025 19:36:52 +0000 (0:00:00.640) 0:01:02.554 *********
2025-06-05 19:46:57.261583 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.261589 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.261596 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.261603 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.261609 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.261620 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.261626 | orchestrator |
2025-06-05 19:46:57.261633 | orchestrator | TASK
[ceph-handler : Check for a ceph-crash container] ************************* 2025-06-05 19:46:57.261640 | orchestrator | Thursday 05 June 2025 19:36:52 +0000 (0:00:00.826) 0:01:03.380 ********* 2025-06-05 19:46:57.261647 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.261653 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.261660 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.261667 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.261674 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.261681 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.261692 | orchestrator | 2025-06-05 19:46:57.261699 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-05 19:46:57.261705 | orchestrator | Thursday 05 June 2025 19:36:53 +0000 (0:00:01.037) 0:01:04.418 ********* 2025-06-05 19:46:57.261712 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.261719 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.261726 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.261732 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.261739 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.261745 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.261752 | orchestrator | 2025-06-05 19:46:57.261759 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-05 19:46:57.261766 | orchestrator | Thursday 05 June 2025 19:36:55 +0000 (0:00:01.499) 0:01:05.917 ********* 2025-06-05 19:46:57.261773 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.261780 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.261786 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.261793 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.261800 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.261807 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.261813 | 
orchestrator | 2025-06-05 19:46:57.261820 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-05 19:46:57.261827 | orchestrator | Thursday 05 June 2025 19:36:55 +0000 (0:00:00.495) 0:01:06.413 ********* 2025-06-05 19:46:57.261834 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.261840 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.261847 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.261854 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.261860 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.261867 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.261874 | orchestrator | 2025-06-05 19:46:57.261881 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-05 19:46:57.261888 | orchestrator | Thursday 05 June 2025 19:36:56 +0000 (0:00:00.687) 0:01:07.100 ********* 2025-06-05 19:46:57.261894 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.261901 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.261908 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.261914 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.261921 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.261928 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.261935 | orchestrator | 2025-06-05 19:46:57.261942 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-05 19:46:57.261949 | orchestrator | Thursday 05 June 2025 19:36:57 +0000 (0:00:00.639) 0:01:07.740 ********* 2025-06-05 19:46:57.261955 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.261962 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.261969 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.261975 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.261982 | orchestrator | skipping: [testbed-node-1] 2025-06-05 
19:46:57.261989 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262068 | orchestrator | 2025-06-05 19:46:57.262079 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-05 19:46:57.262086 | orchestrator | Thursday 05 June 2025 19:36:58 +0000 (0:00:01.014) 0:01:08.755 ********* 2025-06-05 19:46:57.262093 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.262100 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.262106 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.262113 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.262120 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.262127 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262133 | orchestrator | 2025-06-05 19:46:57.262140 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-05 19:46:57.262147 | orchestrator | Thursday 05 June 2025 19:36:58 +0000 (0:00:00.612) 0:01:09.367 ********* 2025-06-05 19:46:57.262153 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.262165 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.262172 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.262179 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.262186 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.262192 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262199 | orchestrator | 2025-06-05 19:46:57.262206 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-05 19:46:57.262213 | orchestrator | Thursday 05 June 2025 19:36:59 +0000 (0:00:00.897) 0:01:10.265 ********* 2025-06-05 19:46:57.262219 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.262226 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.262232 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.262239 | 
orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.262246 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.262252 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262259 | orchestrator | 2025-06-05 19:46:57.262271 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-05 19:46:57.262278 | orchestrator | Thursday 05 June 2025 19:37:00 +0000 (0:00:00.888) 0:01:11.153 ********* 2025-06-05 19:46:57.262285 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.262291 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.262298 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.262304 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.262311 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.262318 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.262324 | orchestrator | 2025-06-05 19:46:57.262331 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-05 19:46:57.262338 | orchestrator | Thursday 05 June 2025 19:37:01 +0000 (0:00:00.821) 0:01:11.975 ********* 2025-06-05 19:46:57.262345 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.262351 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.262358 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.262365 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.262371 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.262384 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.262391 | orchestrator | 2025-06-05 19:46:57.262398 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-05 19:46:57.262405 | orchestrator | Thursday 05 June 2025 19:37:02 +0000 (0:00:00.722) 0:01:12.698 ********* 2025-06-05 19:46:57.262411 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.262418 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.262424 | 
orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.262431 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.262438 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.262444 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.262451 | orchestrator | 2025-06-05 19:46:57.262458 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-05 19:46:57.262464 | orchestrator | Thursday 05 June 2025 19:37:03 +0000 (0:00:01.160) 0:01:13.858 ********* 2025-06-05 19:46:57.262471 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.262478 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.262485 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.262491 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.262498 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.262505 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.262511 | orchestrator | 2025-06-05 19:46:57.262518 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-05 19:46:57.262525 | orchestrator | Thursday 05 June 2025 19:37:04 +0000 (0:00:01.521) 0:01:15.380 ********* 2025-06-05 19:46:57.262531 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.262538 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.262545 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.262551 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.262558 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.262569 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.262576 | orchestrator | 2025-06-05 19:46:57.262582 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-05 19:46:57.262589 | orchestrator | Thursday 05 June 2025 19:37:06 +0000 (0:00:01.748) 0:01:17.128 ********* 2025-06-05 19:46:57.262596 | orchestrator | included: 
/ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.262603 | orchestrator | 2025-06-05 19:46:57.262609 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-05 19:46:57.262616 | orchestrator | Thursday 05 June 2025 19:37:07 +0000 (0:00:01.114) 0:01:18.242 ********* 2025-06-05 19:46:57.262623 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.262629 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.262636 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.262643 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.262649 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.262656 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262663 | orchestrator | 2025-06-05 19:46:57.262669 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-05 19:46:57.262676 | orchestrator | Thursday 05 June 2025 19:37:08 +0000 (0:00:00.763) 0:01:19.006 ********* 2025-06-05 19:46:57.262683 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.262689 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.262696 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.262703 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.262709 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.262716 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262723 | orchestrator | 2025-06-05 19:46:57.262729 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-05 19:46:57.262736 | orchestrator | Thursday 05 June 2025 19:37:08 +0000 (0:00:00.529) 0:01:19.536 ********* 2025-06-05 19:46:57.262743 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-05 
19:46:57.262749 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-05 19:46:57.262756 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-05 19:46:57.262763 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-05 19:46:57.262769 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-05 19:46:57.262776 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-05 19:46:57.262783 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-05 19:46:57.262789 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-05 19:46:57.262796 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-05 19:46:57.262803 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-05 19:46:57.262810 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-05 19:46:57.262827 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-05 19:46:57.262834 | orchestrator | 2025-06-05 19:46:57.262841 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-05 19:46:57.262848 | orchestrator | Thursday 05 June 2025 19:37:10 +0000 (0:00:01.442) 0:01:20.979 ********* 2025-06-05 19:46:57.262854 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.262861 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.262868 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.262874 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.262881 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.262888 | 
orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.262900 | orchestrator | 2025-06-05 19:46:57.262906 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-05 19:46:57.262913 | orchestrator | Thursday 05 June 2025 19:37:11 +0000 (0:00:01.013) 0:01:21.992 ********* 2025-06-05 19:46:57.262920 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.262931 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.262937 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.262944 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.262951 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.262957 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.262964 | orchestrator | 2025-06-05 19:46:57.262971 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-05 19:46:57.262977 | orchestrator | Thursday 05 June 2025 19:37:12 +0000 (0:00:00.811) 0:01:22.804 ********* 2025-06-05 19:46:57.262984 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263006 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263014 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263021 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263028 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263034 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.263041 | orchestrator | 2025-06-05 19:46:57.263048 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-05 19:46:57.263054 | orchestrator | Thursday 05 June 2025 19:37:12 +0000 (0:00:00.627) 0:01:23.432 ********* 2025-06-05 19:46:57.263061 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263092 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263099 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263106 | 
orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263112 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263119 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.263126 | orchestrator | 2025-06-05 19:46:57.263132 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-05 19:46:57.263139 | orchestrator | Thursday 05 June 2025 19:37:13 +0000 (0:00:00.955) 0:01:24.388 ********* 2025-06-05 19:46:57.263146 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.263153 | orchestrator | 2025-06-05 19:46:57.263160 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-05 19:46:57.263166 | orchestrator | Thursday 05 June 2025 19:37:15 +0000 (0:00:01.258) 0:01:25.646 ********* 2025-06-05 19:46:57.263173 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.263180 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.263187 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.263194 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.263200 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.263207 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.263214 | orchestrator | 2025-06-05 19:46:57.263220 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-05 19:46:57.263227 | orchestrator | Thursday 05 June 2025 19:38:25 +0000 (0:01:09.996) 0:02:35.643 ********* 2025-06-05 19:46:57.263234 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-05 19:46:57.263240 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-05 19:46:57.263247 | orchestrator | skipping: [testbed-node-3] => 
(item=docker.io/grafana/grafana:6.7.4)  2025-06-05 19:46:57.263254 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263261 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-05 19:46:57.263267 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-05 19:46:57.263274 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-05 19:46:57.263281 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263296 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-05 19:46:57.263303 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-05 19:46:57.263310 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-05 19:46:57.263317 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263323 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-05 19:46:57.263330 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-05 19:46:57.263337 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-05 19:46:57.263343 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263350 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-05 19:46:57.263357 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-05 19:46:57.263364 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-05 19:46:57.263370 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263377 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-05 19:46:57.263388 | orchestrator | skipping: [testbed-node-2] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-06-05 19:46:57.263395 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-05 19:46:57.263402 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.263409 | orchestrator | 2025-06-05 19:46:57.263416 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-05 19:46:57.263422 | orchestrator | Thursday 05 June 2025 19:38:25 +0000 (0:00:00.757) 0:02:36.401 ********* 2025-06-05 19:46:57.263429 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263436 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263443 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263449 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263456 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263463 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.263470 | orchestrator | 2025-06-05 19:46:57.263476 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-05 19:46:57.263483 | orchestrator | Thursday 05 June 2025 19:38:26 +0000 (0:00:00.699) 0:02:37.101 ********* 2025-06-05 19:46:57.263494 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263501 | orchestrator | 2025-06-05 19:46:57.263508 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-05 19:46:57.263514 | orchestrator | Thursday 05 June 2025 19:38:26 +0000 (0:00:00.223) 0:02:37.324 ********* 2025-06-05 19:46:57.263521 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263528 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263534 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263541 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263548 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263554 | orchestrator | skipping: 
[testbed-node-2] 2025-06-05 19:46:57.263561 | orchestrator | 2025-06-05 19:46:57.263568 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-05 19:46:57.263574 | orchestrator | Thursday 05 June 2025 19:38:27 +0000 (0:00:00.862) 0:02:38.186 ********* 2025-06-05 19:46:57.263581 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263588 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263595 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263601 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263608 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263615 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.263621 | orchestrator | 2025-06-05 19:46:57.263628 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-05 19:46:57.263635 | orchestrator | Thursday 05 June 2025 19:38:28 +0000 (0:00:00.772) 0:02:38.958 ********* 2025-06-05 19:46:57.263649 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263656 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263663 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263670 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263676 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263683 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.263689 | orchestrator | 2025-06-05 19:46:57.263696 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-05 19:46:57.263703 | orchestrator | Thursday 05 June 2025 19:38:29 +0000 (0:00:00.784) 0:02:39.743 ********* 2025-06-05 19:46:57.263710 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.263717 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.263723 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.263730 | orchestrator | ok: [testbed-node-1] 2025-06-05 
19:46:57.263737 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.263744 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.263750 | orchestrator | 2025-06-05 19:46:57.263757 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-05 19:46:57.263764 | orchestrator | Thursday 05 June 2025 19:38:31 +0000 (0:00:02.055) 0:02:41.799 ********* 2025-06-05 19:46:57.263770 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.263777 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.263784 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.263790 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.263797 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.263804 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.263810 | orchestrator | 2025-06-05 19:46:57.263817 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-05 19:46:57.263824 | orchestrator | Thursday 05 June 2025 19:38:31 +0000 (0:00:00.686) 0:02:42.485 ********* 2025-06-05 19:46:57.263831 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.263838 | orchestrator | 2025-06-05 19:46:57.263845 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-05 19:46:57.263852 | orchestrator | Thursday 05 June 2025 19:38:33 +0000 (0:00:01.083) 0:02:43.568 ********* 2025-06-05 19:46:57.263858 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.263865 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.263872 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.263878 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.263885 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.263892 | orchestrator | skipping: 
[testbed-node-2]
2025-06-05 19:46:57.263899 | orchestrator |
2025-06-05 19:46:57.263905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-06-05 19:46:57.263912 | orchestrator | Thursday 05 June 2025 19:38:33 +0000 (0:00:00.644) 0:02:44.213 *********
2025-06-05 19:46:57.263919 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.263926 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.263932 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.263939 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.263946 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.263952 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.263959 | orchestrator |
2025-06-05 19:46:57.263966 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-06-05 19:46:57.263973 | orchestrator | Thursday 05 June 2025 19:38:34 +0000 (0:00:00.882) 0:02:45.096 *********
2025-06-05 19:46:57.263979 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.263986 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.264008 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.264015 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.264022 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.264033 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.264046 | orchestrator |
2025-06-05 19:46:57.264053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-05 19:46:57.264059 | orchestrator | Thursday 05 June 2025 19:38:35 +0000 (0:00:00.692) 0:02:45.788 *********
2025-06-05 19:46:57.264066 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.264073 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.264080 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.264086 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.264093 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.264100 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.264106 | orchestrator |
2025-06-05 19:46:57.264113 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-05 19:46:57.264120 | orchestrator | Thursday 05 June 2025 19:38:36 +0000 (0:00:00.795) 0:02:46.584 *********
2025-06-05 19:46:57.264127 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.264133 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.264140 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.264150 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.264157 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.264164 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.264170 | orchestrator |
2025-06-05 19:46:57.264177 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-05 19:46:57.264184 | orchestrator | Thursday 05 June 2025 19:38:36 +0000 (0:00:00.590) 0:02:47.175 *********
2025-06-05 19:46:57.264191 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.264197 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.264204 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.264211 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.264217 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.264224 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.264231 | orchestrator |
2025-06-05 19:46:57.264237 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-05 19:46:57.264245 | orchestrator | Thursday 05 June 2025 19:38:37 +0000 (0:00:00.702) 0:02:47.877 *********
2025-06-05 19:46:57.264251 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.264258 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.264264 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.264271 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.264278 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.264285 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.264291 | orchestrator |
2025-06-05 19:46:57.264298 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-05 19:46:57.264305 | orchestrator | Thursday 05 June 2025 19:38:38 +0000 (0:00:00.692) 0:02:48.569 *********
2025-06-05 19:46:57.264312 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.264318 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.264325 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.264332 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.264338 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.264345 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.264352 | orchestrator |
2025-06-05 19:46:57.264359 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-05 19:46:57.264365 | orchestrator | Thursday 05 June 2025 19:38:39 +0000 (0:00:01.095) 0:02:49.665 *********
2025-06-05 19:46:57.264372 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.264379 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.264385 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.264392 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.264399 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.264406 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.264412 | orchestrator |
2025-06-05 19:46:57.264419 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-05 19:46:57.264426 | orchestrator | Thursday 05 June 2025 19:38:40 +0000 (0:00:01.274) 0:02:50.939 *********
2025-06-05 19:46:57.264437 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.264444 | orchestrator |
2025-06-05 19:46:57.264451 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-05 19:46:57.264458 | orchestrator | Thursday 05 June 2025 19:38:41 +0000 (0:00:00.959) 0:02:51.899 *********
2025-06-05 19:46:57.264464 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-05 19:46:57.264471 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-05 19:46:57.264478 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-05 19:46:57.264485 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-05 19:46:57.264491 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-05 19:46:57.264498 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-05 19:46:57.264505 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-05 19:46:57.264511 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-05 19:46:57.264518 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-05 19:46:57.264525 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-05 19:46:57.264532 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-05 19:46:57.264538 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-05 19:46:57.264545 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-05 19:46:57.264552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-05 19:46:57.264558 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-05 19:46:57.264565 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-05 19:46:57.264572 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-05 19:46:57.264578 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-05 19:46:57.264585 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-05 19:46:57.264592 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-05 19:46:57.264603 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-05 19:46:57.264610 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-05 19:46:57.264617 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-05 19:46:57.264623 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-05 19:46:57.264630 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-05 19:46:57.264637 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-05 19:46:57.264644 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-05 19:46:57.264650 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-05 19:46:57.264657 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-05 19:46:57.264664 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-05 19:46:57.264670 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-05 19:46:57.264681 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-05 19:46:57.264688 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-05 19:46:57.264694 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-05 19:46:57.264701 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-05 19:46:57.264708 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-05 19:46:57.264714 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-05 19:46:57.264721 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-05 19:46:57.264728 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-05 19:46:57.264739 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-05 19:46:57.264746 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-05 19:46:57.264752 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-05 19:46:57.264759 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-05 19:46:57.264766 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-05 19:46:57.264772 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-05 19:46:57.264779 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-05 19:46:57.264786 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-05 19:46:57.264793 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-05 19:46:57.264799 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-05 19:46:57.264806 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-05 19:46:57.264812 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-05 19:46:57.264819 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-05 19:46:57.264826 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-05 19:46:57.264833 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-05 19:46:57.264839 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-05 19:46:57.264846 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-05 19:46:57.264853 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-05 19:46:57.264859 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-05 19:46:57.264866 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-05 19:46:57.264873 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-05 19:46:57.264880 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-05 19:46:57.264886 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-05 19:46:57.264893 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-05 19:46:57.264900 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-05 19:46:57.264906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-05 19:46:57.264913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-05 19:46:57.264920 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-05 19:46:57.264927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-05 19:46:57.264933 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-05 19:46:57.264940 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-05 19:46:57.264947 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-05 19:46:57.264953 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-05 19:46:57.264960 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-05 19:46:57.264967 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-05 19:46:57.264973 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-05 19:46:57.264980 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-05 19:46:57.264987 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-05 19:46:57.265035 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-05 19:46:57.265043 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-05 19:46:57.265055 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-05 19:46:57.265062 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-05 19:46:57.265069 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-05 19:46:57.265075 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-05 19:46:57.265082 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-05 19:46:57.265089 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-05 19:46:57.265095 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-05 19:46:57.265102 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-05 19:46:57.265109 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-05 19:46:57.265119 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-05 19:46:57.265126 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-05 19:46:57.265133 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-05 19:46:57.265139 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-05 19:46:57.265146 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-05 19:46:57.265153 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-05 19:46:57.265160 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-05 19:46:57.265166 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-05 19:46:57.265173 | orchestrator |
2025-06-05 19:46:57.265180 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-05 19:46:57.265187 | orchestrator | Thursday 05 June 2025 19:38:48 +0000 (0:00:07.227) 0:02:59.126 *********
2025-06-05 19:46:57.265194 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265200 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265207 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265214 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.265221 | orchestrator |
2025-06-05 19:46:57.265227 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-05 19:46:57.265234 | orchestrator | Thursday 05 June 2025 19:38:49 +0000 (0:00:00.878) 0:03:00.004 *********
2025-06-05 19:46:57.265241 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.265248 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.265255 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.265262 | orchestrator |
2025-06-05 19:46:57.265268 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-05 19:46:57.265275 | orchestrator | Thursday 05 June 2025 19:38:50 +0000 (0:00:00.677) 0:03:00.682 *********
2025-06-05 19:46:57.265282 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.265289 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.265296 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.265302 | orchestrator |
2025-06-05 19:46:57.265309 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-05 19:46:57.265316 | orchestrator | Thursday 05 June 2025 19:38:51 +0000 (0:00:01.300) 0:03:01.982 *********
2025-06-05 19:46:57.265323 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.265334 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.265341 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.265347 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265354 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265361 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265367 | orchestrator |
2025-06-05 19:46:57.265374 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-06-05 19:46:57.265381 | orchestrator | Thursday 05 June 2025 19:38:52 +0000 (0:00:00.574) 0:03:02.556 *********
2025-06-05 19:46:57.265388 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.265394 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.265401 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.265408 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265415 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265421 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265428 | orchestrator |
2025-06-05 19:46:57.265435 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-06-05 19:46:57.265441 | orchestrator | Thursday 05 June 2025 19:38:52 +0000 (0:00:00.693) 0:03:03.250 *********
2025-06-05 19:46:57.265448 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265455 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.265461 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.265468 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265475 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265482 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265488 | orchestrator |
2025-06-05 19:46:57.265495 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-06-05 19:46:57.265502 | orchestrator | Thursday 05 June 2025 19:38:53 +0000 (0:00:00.525) 0:03:03.775 *********
2025-06-05 19:46:57.265513 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265520 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.265527 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.265533 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265540 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265546 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265553 | orchestrator |
2025-06-05 19:46:57.265559 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-06-05 19:46:57.265566 | orchestrator | Thursday 05 June 2025 19:38:53 +0000 (0:00:00.592) 0:03:04.368 *********
2025-06-05 19:46:57.265572 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265578 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.265584 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.265591 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265597 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265603 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265609 | orchestrator |
2025-06-05 19:46:57.265621 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-06-05 19:46:57.265628 | orchestrator | Thursday 05 June 2025 19:38:54 +0000 (0:00:00.437) 0:03:04.806 *********
2025-06-05 19:46:57.265634 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265641 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.265647 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.265653 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265659 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265666 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265672 | orchestrator |
2025-06-05 19:46:57.265678 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-06-05 19:46:57.265685 | orchestrator | Thursday 05 June 2025 19:38:54 +0000 (0:00:00.614) 0:03:05.421 *********
2025-06-05 19:46:57.265691 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265697 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.265703 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.265714 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265720 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265727 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265733 | orchestrator |
2025-06-05 19:46:57.265739 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-06-05 19:46:57.265746 | orchestrator | Thursday 05 June 2025 19:38:55 +0000 (0:00:00.602) 0:03:06.023 *********
2025-06-05 19:46:57.265752 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265758 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.265764 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.265770 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265776 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265783 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265789 | orchestrator |
2025-06-05 19:46:57.265795 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-06-05 19:46:57.265801 | orchestrator | Thursday 05 June 2025 19:38:56 +0000 (0:00:00.650) 0:03:06.673 *********
2025-06-05 19:46:57.265808 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265814 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265820 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265827 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.265833 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.265839 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.265845 | orchestrator |
2025-06-05 19:46:57.265852 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2025-06-05 19:46:57.265858 | orchestrator | Thursday 05 June 2025 19:38:59 +0000 (0:00:03.403) 0:03:10.077 *********
2025-06-05 19:46:57.265865 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.265871 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.265877 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.265883 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265890 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265896 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265902 | orchestrator |
2025-06-05 19:46:57.265908 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2025-06-05 19:46:57.265915 | orchestrator | Thursday 05 June 2025 19:39:00 +0000 (0:00:00.707) 0:03:10.785 *********
2025-06-05 19:46:57.265921 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.265928 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.265934 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.265940 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.265946 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.265953 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.265959 | orchestrator |
2025-06-05 19:46:57.265965 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2025-06-05 19:46:57.265971 | orchestrator | Thursday 05 June 2025 19:39:00 +0000 (0:00:00.623) 0:03:11.409 *********
2025-06-05 19:46:57.265978 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.265984 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266003 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266014 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266047 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266057 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266064 | orchestrator |
2025-06-05 19:46:57.266070 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2025-06-05 19:46:57.266076 | orchestrator | Thursday 05 June 2025 19:39:01 +0000 (0:00:00.670) 0:03:12.080 *********
2025-06-05 19:46:57.266083 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.266089 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.266096 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-05 19:46:57.266108 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266114 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266120 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266127 | orchestrator |
2025-06-05 19:46:57.266137 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2025-06-05 19:46:57.266144 | orchestrator | Thursday 05 June 2025 19:39:02 +0000 (0:00:00.525) 0:03:12.606 *********
2025-06-05 19:46:57.266151 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2025-06-05 19:46:57.266163 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2025-06-05 19:46:57.266170 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266177 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2025-06-05 19:46:57.266184 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2025-06-05 19:46:57.266190 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2025-06-05 19:46:57.266197 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2025-06-05 19:46:57.266203 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266210 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266216 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266222 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266228 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266234 | orchestrator |
2025-06-05 19:46:57.266241 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-06-05 19:46:57.266247 | orchestrator | Thursday 05 June 2025 19:39:02 +0000 (0:00:00.798) 0:03:13.404 *********
2025-06-05 19:46:57.266253 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266259 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266265 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266272 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266278 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266284 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266290 | orchestrator |
2025-06-05 19:46:57.266296 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-06-05 19:46:57.266303 | orchestrator | Thursday 05 June 2025 19:39:03 +0000 (0:00:00.512) 0:03:13.917 *********
2025-06-05 19:46:57.266309 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266315 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266321 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266331 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266337 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266344 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266350 | orchestrator |
2025-06-05 19:46:57.266356 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-06-05 19:46:57.266363 | orchestrator | Thursday 05 June 2025 19:39:04 +0000 (0:00:00.568) 0:03:14.704 *********
2025-06-05 19:46:57.266369 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266375 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266381 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266388 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266394 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266400 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266406 | orchestrator |
2025-06-05 19:46:57.266412 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-06-05 19:46:57.266419 | orchestrator | Thursday 05 June 2025 19:39:04 +0000 (0:00:00.568) 0:03:15.272 *********
2025-06-05 19:46:57.266425 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266431 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266437 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266444 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266450 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266457 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266464 | orchestrator |
2025-06-05 19:46:57.266470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-06-05 19:46:57.266476 | orchestrator | Thursday 05 June 2025 19:39:05 +0000 (0:00:00.692) 0:03:15.965 *********
2025-06-05 19:46:57.266483 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266492 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.266499 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.266505 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266511 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266518 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266524 | orchestrator |
2025-06-05 19:46:57.266530 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-06-05 19:46:57.266536 | orchestrator | Thursday 05 June 2025 19:39:06 +0000 (0:00:00.676) 0:03:16.642 *********
2025-06-05 19:46:57.266542 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.266549 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.266555 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.266561 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266568 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266574 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266580 | orchestrator |
2025-06-05 19:46:57.266586 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-06-05 19:46:57.266596 | orchestrator | Thursday 05 June 2025 19:39:06 +0000 (0:00:00.845) 0:03:17.487 *********
2025-06-05 19:46:57.266602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.266609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.266615 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.266621 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266628 | orchestrator |
2025-06-05 19:46:57.266634 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-06-05 19:46:57.266640 | orchestrator | Thursday 05 June 2025 19:39:07 +0000 (0:00:00.355) 0:03:17.843 *********
2025-06-05 19:46:57.266646 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.266653 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.266659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.266665 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266671 | orchestrator |
2025-06-05 19:46:57.266682 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-06-05 19:46:57.266689 | orchestrator | Thursday 05 June 2025 19:39:07 +0000 (0:00:00.351) 0:03:18.195 *********
2025-06-05 19:46:57.266695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.266701 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.266707 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.266713 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.266720 | orchestrator |
2025-06-05 19:46:57.266726 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-06-05 19:46:57.266732 | orchestrator | Thursday 05 June 2025 19:39:07 +0000 (0:00:00.345) 0:03:18.541 *********
2025-06-05 19:46:57.266738 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.266745 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.266751 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.266757 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266764 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266770 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266776 | orchestrator |
2025-06-05 19:46:57.266783 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-06-05 19:46:57.266789 | orchestrator | Thursday 05 June 2025 19:39:08 +0000 (0:00:00.589) 0:03:19.130 *********
2025-06-05 19:46:57.266795 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-05 19:46:57.266801 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-06-05 19:46:57.266808 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-06-05 19:46:57.266814 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-06-05 19:46:57.266820 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.266826 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-06-05 19:46:57.266832 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.266839 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-06-05 19:46:57.266845 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.266851 | orchestrator |
2025-06-05 19:46:57.266857 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-06-05 19:46:57.266864 | orchestrator | Thursday 05 June 2025 19:39:10 +0000 (0:00:01.660) 0:03:20.791 *********
2025-06-05 19:46:57.266870 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.266876 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.266882 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.266889 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.266895 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.266901 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.266907 | orchestrator |
2025-06-05 19:46:57.266914 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-05 19:46:57.266920 | orchestrator | Thursday 05 June 2025 19:39:12 +0000 (0:00:02.550) 0:03:23.341 *********
2025-06-05 19:46:57.266926 | orchestrator | changed: [testbed-node-4]
2025-06-05
19:46:57.266933 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.266939 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.266945 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.266951 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.266958 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.266964 | orchestrator | 2025-06-05 19:46:57.266970 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-05 19:46:57.266976 | orchestrator | Thursday 05 June 2025 19:39:14 +0000 (0:00:01.317) 0:03:24.659 ********* 2025-06-05 19:46:57.266983 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.266989 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.267032 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.267039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.267045 | orchestrator | 2025-06-05 19:46:57.267052 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-05 19:46:57.267062 | orchestrator | Thursday 05 June 2025 19:39:15 +0000 (0:00:01.073) 0:03:25.732 ********* 2025-06-05 19:46:57.267069 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.267075 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.267082 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.267088 | orchestrator | 2025-06-05 19:46:57.267098 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-05 19:46:57.267105 | orchestrator | Thursday 05 June 2025 19:39:15 +0000 (0:00:00.346) 0:03:26.078 ********* 2025-06-05 19:46:57.267111 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.267118 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.267124 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.267130 | 
orchestrator | 2025-06-05 19:46:57.267137 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-05 19:46:57.267143 | orchestrator | Thursday 05 June 2025 19:39:17 +0000 (0:00:01.584) 0:03:27.662 ********* 2025-06-05 19:46:57.267149 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-05 19:46:57.267156 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-05 19:46:57.267162 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-05 19:46:57.267168 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.267174 | orchestrator | 2025-06-05 19:46:57.267184 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-05 19:46:57.267190 | orchestrator | Thursday 05 June 2025 19:39:17 +0000 (0:00:00.607) 0:03:28.270 ********* 2025-06-05 19:46:57.267197 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.267203 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.267209 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.267215 | orchestrator | 2025-06-05 19:46:57.267222 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-05 19:46:57.267228 | orchestrator | Thursday 05 June 2025 19:39:18 +0000 (0:00:00.398) 0:03:28.669 ********* 2025-06-05 19:46:57.267234 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.267241 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.267247 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.267253 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.267260 | orchestrator | 2025-06-05 19:46:57.267266 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-05 19:46:57.267272 | orchestrator | Thursday 05 June 2025 19:39:19 
+0000 (0:00:00.918) 0:03:29.587 ********* 2025-06-05 19:46:57.267278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:46:57.267285 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:46:57.267291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:46:57.267297 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267303 | orchestrator | 2025-06-05 19:46:57.267309 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-05 19:46:57.267316 | orchestrator | Thursday 05 June 2025 19:39:19 +0000 (0:00:00.292) 0:03:29.879 ********* 2025-06-05 19:46:57.267322 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267331 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.267342 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.267351 | orchestrator | 2025-06-05 19:46:57.267360 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-05 19:46:57.267371 | orchestrator | Thursday 05 June 2025 19:39:19 +0000 (0:00:00.281) 0:03:30.161 ********* 2025-06-05 19:46:57.267380 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267389 | orchestrator | 2025-06-05 19:46:57.267398 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-05 19:46:57.267470 | orchestrator | Thursday 05 June 2025 19:39:19 +0000 (0:00:00.178) 0:03:30.339 ********* 2025-06-05 19:46:57.267478 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267498 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.267509 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.267516 | orchestrator | 2025-06-05 19:46:57.267524 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-05 19:46:57.267531 | orchestrator | Thursday 05 June 2025 
19:39:20 +0000 (0:00:00.244) 0:03:30.584 ********* 2025-06-05 19:46:57.267539 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267548 | orchestrator | 2025-06-05 19:46:57.267555 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-05 19:46:57.267563 | orchestrator | Thursday 05 June 2025 19:39:20 +0000 (0:00:00.175) 0:03:30.759 ********* 2025-06-05 19:46:57.267571 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267579 | orchestrator | 2025-06-05 19:46:57.267587 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-05 19:46:57.267595 | orchestrator | Thursday 05 June 2025 19:39:20 +0000 (0:00:00.179) 0:03:30.939 ********* 2025-06-05 19:46:57.267604 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267612 | orchestrator | 2025-06-05 19:46:57.267620 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-05 19:46:57.267629 | orchestrator | Thursday 05 June 2025 19:39:20 +0000 (0:00:00.255) 0:03:31.194 ********* 2025-06-05 19:46:57.267637 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267646 | orchestrator | 2025-06-05 19:46:57.267655 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-05 19:46:57.267664 | orchestrator | Thursday 05 June 2025 19:39:20 +0000 (0:00:00.189) 0:03:31.384 ********* 2025-06-05 19:46:57.267673 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267679 | orchestrator | 2025-06-05 19:46:57.267684 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-05 19:46:57.267690 | orchestrator | Thursday 05 June 2025 19:39:21 +0000 (0:00:00.205) 0:03:31.590 ********* 2025-06-05 19:46:57.267695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:46:57.267701 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:46:57.267706 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:46:57.267712 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267717 | orchestrator | 2025-06-05 19:46:57.267723 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-05 19:46:57.267728 | orchestrator | Thursday 05 June 2025 19:39:21 +0000 (0:00:00.343) 0:03:31.933 ********* 2025-06-05 19:46:57.267734 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267746 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.267751 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.267757 | orchestrator | 2025-06-05 19:46:57.267762 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-05 19:46:57.267768 | orchestrator | Thursday 05 June 2025 19:39:21 +0000 (0:00:00.271) 0:03:32.205 ********* 2025-06-05 19:46:57.267773 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267779 | orchestrator | 2025-06-05 19:46:57.267784 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-05 19:46:57.267790 | orchestrator | Thursday 05 June 2025 19:39:21 +0000 (0:00:00.193) 0:03:32.398 ********* 2025-06-05 19:46:57.267795 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267801 | orchestrator | 2025-06-05 19:46:57.267806 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-05 19:46:57.267811 | orchestrator | Thursday 05 June 2025 19:39:22 +0000 (0:00:00.203) 0:03:32.602 ********* 2025-06-05 19:46:57.267817 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.267828 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.267834 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.267839 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.267845 | orchestrator | 2025-06-05 19:46:57.267851 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-05 19:46:57.267861 | orchestrator | Thursday 05 June 2025 19:39:22 +0000 (0:00:00.817) 0:03:33.419 ********* 2025-06-05 19:46:57.267867 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.267872 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.267878 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.267883 | orchestrator | 2025-06-05 19:46:57.267889 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-05 19:46:57.267894 | orchestrator | Thursday 05 June 2025 19:39:23 +0000 (0:00:00.259) 0:03:33.679 ********* 2025-06-05 19:46:57.267900 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.267905 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.267911 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.267916 | orchestrator | 2025-06-05 19:46:57.267921 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-05 19:46:57.267927 | orchestrator | Thursday 05 June 2025 19:39:24 +0000 (0:00:01.109) 0:03:34.789 ********* 2025-06-05 19:46:57.267932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:46:57.267938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:46:57.267943 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:46:57.267949 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.267954 | orchestrator | 2025-06-05 19:46:57.267960 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-05 19:46:57.267965 | orchestrator | Thursday 05 June 2025 19:39:25 +0000 (0:00:00.858) 
0:03:35.647 ********* 2025-06-05 19:46:57.267971 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.267976 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.267982 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.267987 | orchestrator | 2025-06-05 19:46:57.268009 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-05 19:46:57.268015 | orchestrator | Thursday 05 June 2025 19:39:25 +0000 (0:00:00.293) 0:03:35.940 ********* 2025-06-05 19:46:57.268020 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268026 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268031 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268037 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.268043 | orchestrator | 2025-06-05 19:46:57.268048 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-05 19:46:57.268054 | orchestrator | Thursday 05 June 2025 19:39:26 +0000 (0:00:00.936) 0:03:36.877 ********* 2025-06-05 19:46:57.268059 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.268065 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.268070 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.268076 | orchestrator | 2025-06-05 19:46:57.268081 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-05 19:46:57.268087 | orchestrator | Thursday 05 June 2025 19:39:26 +0000 (0:00:00.323) 0:03:37.201 ********* 2025-06-05 19:46:57.268092 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.268098 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.268103 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.268109 | orchestrator | 2025-06-05 19:46:57.268114 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] 
******************** 2025-06-05 19:46:57.268120 | orchestrator | Thursday 05 June 2025 19:39:28 +0000 (0:00:01.397) 0:03:38.598 ********* 2025-06-05 19:46:57.268125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:46:57.268131 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:46:57.268136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:46:57.268142 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.268147 | orchestrator | 2025-06-05 19:46:57.268153 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-05 19:46:57.268158 | orchestrator | Thursday 05 June 2025 19:39:28 +0000 (0:00:00.813) 0:03:39.412 ********* 2025-06-05 19:46:57.268168 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.268174 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.268179 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.268185 | orchestrator | 2025-06-05 19:46:57.268190 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-05 19:46:57.268196 | orchestrator | Thursday 05 June 2025 19:39:29 +0000 (0:00:00.302) 0:03:39.714 ********* 2025-06-05 19:46:57.268201 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.268207 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.268213 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.268218 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268224 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268229 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268235 | orchestrator | 2025-06-05 19:46:57.268240 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-05 19:46:57.268250 | orchestrator | Thursday 05 June 2025 19:39:29 +0000 (0:00:00.798) 0:03:40.512 ********* 2025-06-05 
19:46:57.268256 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.268261 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.268267 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.268272 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.268278 | orchestrator | 2025-06-05 19:46:57.268283 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-05 19:46:57.268289 | orchestrator | Thursday 05 June 2025 19:39:30 +0000 (0:00:00.986) 0:03:41.499 ********* 2025-06-05 19:46:57.268294 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268300 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268305 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.268311 | orchestrator | 2025-06-05 19:46:57.268316 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-05 19:46:57.268325 | orchestrator | Thursday 05 June 2025 19:39:31 +0000 (0:00:00.337) 0:03:41.836 ********* 2025-06-05 19:46:57.268331 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.268336 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.268342 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.268347 | orchestrator | 2025-06-05 19:46:57.268353 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-05 19:46:57.268359 | orchestrator | Thursday 05 June 2025 19:39:32 +0000 (0:00:01.128) 0:03:42.964 ********* 2025-06-05 19:46:57.268368 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-05 19:46:57.268377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-05 19:46:57.268385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-05 19:46:57.268393 | orchestrator | skipping: [testbed-node-0] 2025-06-05 
19:46:57.268400 | orchestrator | 2025-06-05 19:46:57.268409 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-05 19:46:57.268418 | orchestrator | Thursday 05 June 2025 19:39:33 +0000 (0:00:00.719) 0:03:43.684 ********* 2025-06-05 19:46:57.268427 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268437 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268446 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.268455 | orchestrator | 2025-06-05 19:46:57.268464 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-05 19:46:57.268474 | orchestrator | 2025-06-05 19:46:57.268482 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-05 19:46:57.268491 | orchestrator | Thursday 05 June 2025 19:39:33 +0000 (0:00:00.696) 0:03:44.380 ********* 2025-06-05 19:46:57.268497 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.268503 | orchestrator | 2025-06-05 19:46:57.268509 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-05 19:46:57.268520 | orchestrator | Thursday 05 June 2025 19:39:34 +0000 (0:00:00.471) 0:03:44.852 ********* 2025-06-05 19:46:57.268525 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.268531 | orchestrator | 2025-06-05 19:46:57.268536 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-05 19:46:57.268542 | orchestrator | Thursday 05 June 2025 19:39:34 +0000 (0:00:00.559) 0:03:45.412 ********* 2025-06-05 19:46:57.268547 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268553 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268558 | 
orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.268564 | orchestrator | 2025-06-05 19:46:57.268569 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-05 19:46:57.268575 | orchestrator | Thursday 05 June 2025 19:39:35 +0000 (0:00:00.676) 0:03:46.088 ********* 2025-06-05 19:46:57.268580 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268586 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268591 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268597 | orchestrator | 2025-06-05 19:46:57.268602 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-05 19:46:57.268608 | orchestrator | Thursday 05 June 2025 19:39:35 +0000 (0:00:00.285) 0:03:46.374 ********* 2025-06-05 19:46:57.268613 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268619 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268624 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268630 | orchestrator | 2025-06-05 19:46:57.268635 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-05 19:46:57.268641 | orchestrator | Thursday 05 June 2025 19:39:36 +0000 (0:00:00.240) 0:03:46.614 ********* 2025-06-05 19:46:57.268646 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268651 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268657 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268663 | orchestrator | 2025-06-05 19:46:57.268668 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-05 19:46:57.268674 | orchestrator | Thursday 05 June 2025 19:39:36 +0000 (0:00:00.428) 0:03:47.043 ********* 2025-06-05 19:46:57.268679 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268684 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268690 | orchestrator | ok: 
[testbed-node-2] 2025-06-05 19:46:57.268695 | orchestrator | 2025-06-05 19:46:57.268701 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-05 19:46:57.268706 | orchestrator | Thursday 05 June 2025 19:39:37 +0000 (0:00:00.686) 0:03:47.729 ********* 2025-06-05 19:46:57.268712 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268717 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268723 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268728 | orchestrator | 2025-06-05 19:46:57.268734 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-05 19:46:57.268739 | orchestrator | Thursday 05 June 2025 19:39:37 +0000 (0:00:00.290) 0:03:48.019 ********* 2025-06-05 19:46:57.268745 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268750 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268756 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268761 | orchestrator | 2025-06-05 19:46:57.268772 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-05 19:46:57.268778 | orchestrator | Thursday 05 June 2025 19:39:37 +0000 (0:00:00.247) 0:03:48.267 ********* 2025-06-05 19:46:57.268783 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268788 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268794 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.268799 | orchestrator | 2025-06-05 19:46:57.268805 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-05 19:46:57.268810 | orchestrator | Thursday 05 June 2025 19:39:38 +0000 (0:00:00.872) 0:03:49.139 ********* 2025-06-05 19:46:57.268816 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268829 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268835 | orchestrator | ok: [testbed-node-2] 2025-06-05 
19:46:57.268840 | orchestrator | 2025-06-05 19:46:57.268846 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-05 19:46:57.268852 | orchestrator | Thursday 05 June 2025 19:39:39 +0000 (0:00:00.692) 0:03:49.831 ********* 2025-06-05 19:46:57.268861 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268867 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268872 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268877 | orchestrator | 2025-06-05 19:46:57.268883 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-05 19:46:57.268888 | orchestrator | Thursday 05 June 2025 19:39:39 +0000 (0:00:00.282) 0:03:50.114 ********* 2025-06-05 19:46:57.268894 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.268899 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.268905 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.268910 | orchestrator | 2025-06-05 19:46:57.268915 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-05 19:46:57.268921 | orchestrator | Thursday 05 June 2025 19:39:39 +0000 (0:00:00.302) 0:03:50.417 ********* 2025-06-05 19:46:57.268927 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268932 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268937 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268943 | orchestrator | 2025-06-05 19:46:57.268948 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-05 19:46:57.268954 | orchestrator | Thursday 05 June 2025 19:39:40 +0000 (0:00:00.542) 0:03:50.959 ********* 2025-06-05 19:46:57.268959 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.268965 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.268970 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.268976 | 
orchestrator | 2025-06-05 19:46:57.268981 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-05 19:46:57.268987 | orchestrator | Thursday 05 June 2025 19:39:40 +0000 (0:00:00.310) 0:03:51.270 ********* 2025-06-05 19:46:57.269014 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.269020 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.269025 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.269031 | orchestrator | 2025-06-05 19:46:57.269036 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-05 19:46:57.269041 | orchestrator | Thursday 05 June 2025 19:39:40 +0000 (0:00:00.274) 0:03:51.545 ********* 2025-06-05 19:46:57.269047 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.269052 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.269058 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.269063 | orchestrator | 2025-06-05 19:46:57.269068 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-05 19:46:57.269074 | orchestrator | Thursday 05 June 2025 19:39:41 +0000 (0:00:00.297) 0:03:51.842 ********* 2025-06-05 19:46:57.269079 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.269084 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.269090 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.269095 | orchestrator | 2025-06-05 19:46:57.269100 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-05 19:46:57.269106 | orchestrator | Thursday 05 June 2025 19:39:41 +0000 (0:00:00.516) 0:03:52.359 ********* 2025-06-05 19:46:57.269111 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.269116 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.269122 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.269127 | orchestrator | 
2025-06-05 19:46:57.269133 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-05 19:46:57.269138 | orchestrator | Thursday 05 June 2025 19:39:42 +0000 (0:00:00.304) 0:03:52.663 *********
2025-06-05 19:46:57.269143 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269149 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269159 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269164 | orchestrator |
2025-06-05 19:46:57.269170 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-05 19:46:57.269175 | orchestrator | Thursday 05 June 2025 19:39:42 +0000 (0:00:00.304) 0:03:52.968 *********
2025-06-05 19:46:57.269181 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269186 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269191 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269196 | orchestrator |
2025-06-05 19:46:57.269202 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-06-05 19:46:57.269207 | orchestrator | Thursday 05 June 2025 19:39:43 +0000 (0:00:00.939) 0:03:53.907 *********
2025-06-05 19:46:57.269213 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269218 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269223 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269252 | orchestrator |
2025-06-05 19:46:57.269258 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-06-05 19:46:57.269264 | orchestrator | Thursday 05 June 2025 19:39:43 +0000 (0:00:00.505) 0:03:54.413 *********
2025-06-05 19:46:57.269269 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.269275 | orchestrator |
2025-06-05 19:46:57.269280 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-06-05 19:46:57.269286 | orchestrator | Thursday 05 June 2025 19:39:44 +0000 (0:00:00.919) 0:03:55.332 *********
2025-06-05 19:46:57.269291 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.269297 | orchestrator |
2025-06-05 19:46:57.269302 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-06-05 19:46:57.269311 | orchestrator | Thursday 05 June 2025 19:39:44 +0000 (0:00:00.163) 0:03:55.495 *********
2025-06-05 19:46:57.269317 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-06-05 19:46:57.269323 | orchestrator |
2025-06-05 19:46:57.269328 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-06-05 19:46:57.269333 | orchestrator | Thursday 05 June 2025 19:39:46 +0000 (0:00:01.547) 0:03:57.042 *********
2025-06-05 19:46:57.269339 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269344 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269350 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269355 | orchestrator |
2025-06-05 19:46:57.269361 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-06-05 19:46:57.269366 | orchestrator | Thursday 05 June 2025 19:39:46 +0000 (0:00:00.362) 0:03:57.405 *********
2025-06-05 19:46:57.269372 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269377 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269382 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269388 | orchestrator |
2025-06-05 19:46:57.269393 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-06-05 19:46:57.269403 | orchestrator | Thursday 05 June 2025 19:39:47 +0000 (0:00:00.355) 0:03:57.760 *********
2025-06-05 19:46:57.269408 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269439 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.269445 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.269450 | orchestrator |
2025-06-05 19:46:57.269456 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-06-05 19:46:57.269461 | orchestrator | Thursday 05 June 2025 19:39:48 +0000 (0:00:01.077) 0:03:58.838 *********
2025-06-05 19:46:57.269467 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.269472 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269478 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.269483 | orchestrator |
2025-06-05 19:46:57.269489 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-06-05 19:46:57.269494 | orchestrator | Thursday 05 June 2025 19:39:49 +0000 (0:00:00.892) 0:03:59.730 *********
2025-06-05 19:46:57.269500 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269505 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.269515 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.269521 | orchestrator |
2025-06-05 19:46:57.269526 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-06-05 19:46:57.269532 | orchestrator | Thursday 05 June 2025 19:39:49 +0000 (0:00:00.687) 0:04:00.417 *********
2025-06-05 19:46:57.269537 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269543 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269548 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269554 | orchestrator |
2025-06-05 19:46:57.269559 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-06-05 19:46:57.269565 | orchestrator | Thursday 05 June 2025 19:39:50 +0000 (0:00:00.777) 0:04:01.195 *********
2025-06-05 19:46:57.269570 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269576 | orchestrator |
2025-06-05 19:46:57.269581 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-06-05 19:46:57.269586 | orchestrator | Thursday 05 June 2025 19:39:51 +0000 (0:00:01.270) 0:04:02.466 *********
2025-06-05 19:46:57.269592 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269597 | orchestrator |
2025-06-05 19:46:57.269603 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-06-05 19:46:57.269609 | orchestrator | Thursday 05 June 2025 19:39:52 +0000 (0:00:00.649) 0:04:03.115 *********
2025-06-05 19:46:57.269614 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.269620 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-05 19:46:57.269625 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-05 19:46:57.269631 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-05 19:46:57.269636 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-06-05 19:46:57.269642 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-05 19:46:57.269647 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-05 19:46:57.269653 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-06-05 19:46:57.269658 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-05 19:46:57.269664 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-06-05 19:46:57.269669 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-06-05 19:46:57.269675 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-06-05 19:46:57.269684 | orchestrator |
2025-06-05 19:46:57.269693 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-06-05 19:46:57.269702 | orchestrator | Thursday 05 June 2025 19:39:56 +0000 (0:00:03.655) 0:04:06.771 *********
2025-06-05 19:46:57.269711 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269719 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.269728 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.269736 | orchestrator |
2025-06-05 19:46:57.269744 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-06-05 19:46:57.269753 | orchestrator | Thursday 05 June 2025 19:39:58 +0000 (0:00:01.821) 0:04:08.592 *********
2025-06-05 19:46:57.269762 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269772 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269781 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269790 | orchestrator |
2025-06-05 19:46:57.269798 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-06-05 19:46:57.269803 | orchestrator | Thursday 05 June 2025 19:39:58 +0000 (0:00:00.386) 0:04:08.979 *********
2025-06-05 19:46:57.269809 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.269814 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.269820 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.269825 | orchestrator |
2025-06-05 19:46:57.269831 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-06-05 19:46:57.269836 | orchestrator | Thursday 05 June 2025 19:39:58 +0000 (0:00:00.309) 0:04:09.288 *********
2025-06-05 19:46:57.269847 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269853 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.269858 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.269864 | orchestrator |
2025-06-05 19:46:57.269874 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-06-05 19:46:57.269879 | orchestrator | Thursday 05 June 2025 19:40:01 +0000 (0:00:02.294) 0:04:11.582 *********
2025-06-05 19:46:57.269885 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.269890 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.269896 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.269901 | orchestrator |
2025-06-05 19:46:57.269907 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-06-05 19:46:57.269912 | orchestrator | Thursday 05 June 2025 19:40:02 +0000 (0:00:01.852) 0:04:13.435 *********
2025-06-05 19:46:57.269918 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.269923 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.269928 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.269934 | orchestrator |
2025-06-05 19:46:57.269939 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-06-05 19:46:57.269945 | orchestrator | Thursday 05 June 2025 19:40:03 +0000 (0:00:00.402) 0:04:13.838 *********
2025-06-05 19:46:57.269954 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.269960 | orchestrator |
2025-06-05 19:46:57.269965 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-06-05 19:46:57.269971 | orchestrator | Thursday 05 June 2025 19:40:03 +0000 (0:00:00.565) 0:04:14.403 *********
2025-06-05 19:46:57.269976 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.269982 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.269987 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270010 | orchestrator |
2025-06-05 19:46:57.270119 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-06-05 19:46:57.270128 | orchestrator | Thursday 05 June 2025 19:40:04 +0000 (0:00:00.535) 0:04:14.938 *********
2025-06-05 19:46:57.270134 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270139 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270145 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270150 | orchestrator |
2025-06-05 19:46:57.270156 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-06-05 19:46:57.270161 | orchestrator | Thursday 05 June 2025 19:40:04 +0000 (0:00:00.338) 0:04:15.277 *********
2025-06-05 19:46:57.270167 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.270172 | orchestrator |
2025-06-05 19:46:57.270178 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-06-05 19:46:57.270183 | orchestrator | Thursday 05 June 2025 19:40:05 +0000 (0:00:00.554) 0:04:15.831 *********
2025-06-05 19:46:57.270188 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.270194 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.270199 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.270205 | orchestrator |
2025-06-05 19:46:57.270210 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-06-05 19:46:57.270215 | orchestrator | Thursday 05 June 2025 19:40:07 +0000 (0:00:02.706) 0:04:18.538 *********
2025-06-05 19:46:57.270221 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.270226 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.270232 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.270237 | orchestrator |
2025-06-05 19:46:57.270242 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-06-05 19:46:57.270248 | orchestrator | Thursday 05 June 2025 19:40:09 +0000 (0:00:01.232) 0:04:19.771 *********
2025-06-05 19:46:57.270253 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.270259 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.270264 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.270274 | orchestrator |
2025-06-05 19:46:57.270280 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-06-05 19:46:57.270285 | orchestrator | Thursday 05 June 2025 19:40:11 +0000 (0:00:01.977) 0:04:21.749 *********
2025-06-05 19:46:57.270291 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.270296 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.270301 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.270307 | orchestrator |
2025-06-05 19:46:57.270312 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-06-05 19:46:57.270317 | orchestrator | Thursday 05 June 2025 19:40:13 +0000 (0:00:02.006) 0:04:23.755 *********
2025-06-05 19:46:57.270323 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.270328 | orchestrator |
2025-06-05 19:46:57.270334 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-06-05 19:46:57.270340 | orchestrator | Thursday 05 June 2025 19:40:14 +0000 (0:00:00.819) 0:04:24.575 *********
2025-06-05 19:46:57.270345 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-06-05 19:46:57.270350 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.270356 | orchestrator |
2025-06-05 19:46:57.270361 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-06-05 19:46:57.270367 | orchestrator | Thursday 05 June 2025 19:40:36 +0000 (0:00:22.012) 0:04:46.587 *********
2025-06-05 19:46:57.270372 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.270378 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.270383 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.270389 | orchestrator |
2025-06-05 19:46:57.270394 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-06-05 19:46:57.270400 | orchestrator | Thursday 05 June 2025 19:40:46 +0000 (0:00:10.623) 0:04:57.211 *********
2025-06-05 19:46:57.270405 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270410 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270416 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270421 | orchestrator |
2025-06-05 19:46:57.270427 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-06-05 19:46:57.270432 | orchestrator | Thursday 05 June 2025 19:40:46 +0000 (0:00:00.290) 0:04:57.501 *********
2025-06-05 19:46:57.270460 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-06-05 19:46:57.270471 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-06-05 19:46:57.270478 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-06-05 19:46:57.270485 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-06-05 19:46:57.270495 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-06-05 19:46:57.270501 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__d81f569d60f07604e136bc58ebbdd0984f0b060c'}])
2025-06-05 19:46:57.270508 | orchestrator |
2025-06-05 19:46:57.270513 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-05 19:46:57.270519 | orchestrator | Thursday 05 June 2025 19:41:03 +0000 (0:00:16.277) 0:05:13.779 *********
2025-06-05 19:46:57.270524 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270530 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270535 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270541 | orchestrator |
2025-06-05 19:46:57.270546 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-06-05 19:46:57.270552 | orchestrator | Thursday 05 June 2025 19:41:03 +0000 (0:00:00.314) 0:05:14.094 *********
2025-06-05 19:46:57.270557 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.270563 | orchestrator |
2025-06-05 19:46:57.270615 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-06-05 19:46:57.270621 | orchestrator | Thursday 05 June 2025 19:41:04 +0000 (0:00:00.720) 0:05:14.815 *********
2025-06-05 19:46:57.270626 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.270632 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.270637 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.270643 | orchestrator |
2025-06-05 19:46:57.270648 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-06-05 19:46:57.270654 | orchestrator | Thursday 05 June 2025 19:41:04 +0000 (0:00:00.296) 0:05:15.112 *********
2025-06-05 19:46:57.270659 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270665 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270671 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270676 | orchestrator |
2025-06-05 19:46:57.270682 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-06-05 19:46:57.270687 | orchestrator | Thursday 05 June 2025 19:41:04 +0000 (0:00:00.335) 0:05:15.448 *********
2025-06-05 19:46:57.270693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-05 19:46:57.270699 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-05 19:46:57.270704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-05 19:46:57.270710 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270715 | orchestrator |
2025-06-05 19:46:57.270721 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-06-05 19:46:57.270726 | orchestrator | Thursday 05 June 2025 19:41:05 +0000 (0:00:00.794) 0:05:16.242 *********
2025-06-05 19:46:57.270732 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.270737 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.270743 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.270748 | orchestrator |
2025-06-05 19:46:57.270775 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-06-05 19:46:57.270781 | orchestrator |
2025-06-05 19:46:57.270787 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-05 19:46:57.270792 | orchestrator | Thursday 05 June 2025 19:41:06 +0000 (0:00:00.684) 0:05:16.927 *********
2025-06-05 19:46:57.270798 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.270808 | orchestrator |
2025-06-05 19:46:57.270814 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-05 19:46:57.270819 | orchestrator | Thursday 05 June 2025 19:41:06 +0000 (0:00:00.424) 0:05:17.351 *********
2025-06-05 19:46:57.270824 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.270830 | orchestrator |
2025-06-05 19:46:57.270835 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-05 19:46:57.270844 | orchestrator | Thursday 05 June 2025 19:41:07 +0000 (0:00:00.588) 0:05:17.940 *********
2025-06-05 19:46:57.270850 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.270855 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.270860 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.270866 | orchestrator |
2025-06-05 19:46:57.270871 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-05 19:46:57.270877 | orchestrator | Thursday 05 June 2025 19:41:08 +0000 (0:00:00.667) 0:05:18.608 *********
2025-06-05 19:46:57.270882 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270888 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270893 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270898 | orchestrator |
2025-06-05 19:46:57.270904 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-05 19:46:57.270909 | orchestrator | Thursday 05 June 2025 19:41:08 +0000 (0:00:00.240) 0:05:18.848 *********
2025-06-05 19:46:57.270915 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270920 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270925 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270931 | orchestrator |
2025-06-05 19:46:57.270936 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-05 19:46:57.270942 | orchestrator | Thursday 05 June 2025 19:41:08 +0000 (0:00:00.346) 0:05:19.195 *********
2025-06-05 19:46:57.270947 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.270953 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.270958 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.270987 | orchestrator |
2025-06-05 19:46:57.271033 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-05 19:46:57.271040 | orchestrator | Thursday 05 June 2025 19:41:08 +0000 (0:00:00.206) 0:05:19.401 *********
2025-06-05 19:46:57.271046 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271051 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271057 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271062 | orchestrator |
2025-06-05 19:46:57.271068 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-05 19:46:57.271073 | orchestrator | Thursday 05 June 2025 19:41:09 +0000 (0:00:00.637) 0:05:20.039 *********
2025-06-05 19:46:57.271079 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271085 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271090 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271096 | orchestrator |
2025-06-05 19:46:57.271101 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-05 19:46:57.271107 | orchestrator | Thursday 05 June 2025 19:41:09 +0000 (0:00:00.266) 0:05:20.305 *********
2025-06-05 19:46:57.271112 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271118 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271123 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271129 | orchestrator |
2025-06-05 19:46:57.271134 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-05 19:46:57.271140 | orchestrator | Thursday 05 June 2025 19:41:10 +0000 (0:00:00.396) 0:05:20.702 *********
2025-06-05 19:46:57.271145 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271150 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271155 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271164 | orchestrator |
2025-06-05 19:46:57.271169 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-05 19:46:57.271174 | orchestrator | Thursday 05 June 2025 19:41:10 +0000 (0:00:00.704) 0:05:21.407 *********
2025-06-05 19:46:57.271179 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271184 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271189 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271194 | orchestrator |
2025-06-05 19:46:57.271199 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-05 19:46:57.271204 | orchestrator | Thursday 05 June 2025 19:41:11 +0000 (0:00:00.697) 0:05:22.104 *********
2025-06-05 19:46:57.271209 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271213 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271218 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271223 | orchestrator |
2025-06-05 19:46:57.271228 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-05 19:46:57.271233 | orchestrator | Thursday 05 June 2025 19:41:11 +0000 (0:00:00.238) 0:05:22.343 *********
2025-06-05 19:46:57.271238 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271243 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271248 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271252 | orchestrator |
2025-06-05 19:46:57.271257 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-05 19:46:57.271262 | orchestrator | Thursday 05 June 2025 19:41:12 +0000 (0:00:00.427) 0:05:22.770 *********
2025-06-05 19:46:57.271267 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271272 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271277 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271282 | orchestrator |
2025-06-05 19:46:57.271287 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-05 19:46:57.271292 | orchestrator | Thursday 05 June 2025 19:41:12 +0000 (0:00:00.238) 0:05:23.008 *********
2025-06-05 19:46:57.271296 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271301 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271324 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271330 | orchestrator |
2025-06-05 19:46:57.271335 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-05 19:46:57.271340 | orchestrator | Thursday 05 June 2025 19:41:12 +0000 (0:00:00.252) 0:05:23.261 *********
2025-06-05 19:46:57.271345 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271350 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271354 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271359 | orchestrator |
2025-06-05 19:46:57.271364 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-05 19:46:57.271369 | orchestrator | Thursday 05 June 2025 19:41:12 +0000 (0:00:00.245) 0:05:23.506 *********
2025-06-05 19:46:57.271374 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271379 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271384 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271388 | orchestrator |
2025-06-05 19:46:57.271393 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-05 19:46:57.271402 | orchestrator | Thursday 05 June 2025 19:41:13 +0000 (0:00:00.420) 0:05:23.927 *********
2025-06-05 19:46:57.271407 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271411 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271416 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271421 | orchestrator |
2025-06-05 19:46:57.271426 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-05 19:46:57.271431 | orchestrator | Thursday 05 June 2025 19:41:13 +0000 (0:00:00.268) 0:05:24.195 *********
2025-06-05 19:46:57.271436 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271440 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271445 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271450 | orchestrator |
2025-06-05 19:46:57.271455 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-05 19:46:57.271464 | orchestrator | Thursday 05 June 2025 19:41:13 +0000 (0:00:00.297) 0:05:24.492 *********
2025-06-05 19:46:57.271469 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271474 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271479 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271484 | orchestrator |
2025-06-05 19:46:57.271488 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-05 19:46:57.271493 | orchestrator | Thursday 05 June 2025 19:41:14 +0000 (0:00:00.280) 0:05:24.773 *********
2025-06-05 19:46:57.271498 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271503 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271508 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271513 | orchestrator |
2025-06-05 19:46:57.271518 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-06-05 19:46:57.271522 | orchestrator | Thursday 05 June 2025 19:41:14 +0000 (0:00:00.641) 0:05:25.415 *********
2025-06-05 19:46:57.271527 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-05 19:46:57.271532 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:46:57.271537 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:46:57.271542 | orchestrator |
2025-06-05 19:46:57.271547 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-06-05 19:46:57.271552 | orchestrator | Thursday 05 June 2025 19:41:15 +0000 (0:00:00.556) 0:05:25.972 *********
2025-06-05 19:46:57.271556 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.271561 | orchestrator |
2025-06-05 19:46:57.271566 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-06-05 19:46:57.271571 | orchestrator | Thursday 05 June 2025 19:41:15 +0000 (0:00:00.439) 0:05:26.411 *********
2025-06-05 19:46:57.271576 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.271581 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.271586 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.271590 | orchestrator |
2025-06-05 19:46:57.271595 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-06-05 19:46:57.271600 | orchestrator | Thursday 05 June 2025 19:41:16 +0000 (0:00:00.920) 0:05:27.331 *********
2025-06-05 19:46:57.271605 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271610 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271615 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271620 | orchestrator |
2025-06-05 19:46:57.271624 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-06-05 19:46:57.271629 | orchestrator | Thursday 05 June 2025 19:41:17 +0000 (0:00:00.273) 0:05:27.605 *********
2025-06-05 19:46:57.271634 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271639 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271644 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271649 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-06-05 19:46:57.271654 | orchestrator |
2025-06-05 19:46:57.271658 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-06-05 19:46:57.271663 | orchestrator | Thursday 05 June 2025 19:41:27 +0000 (0:00:10.634) 0:05:38.240 *********
2025-06-05 19:46:57.271668 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271673 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271678 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271683 | orchestrator |
2025-06-05 19:46:57.271687 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-06-05 19:46:57.271692 | orchestrator | Thursday 05 June 2025 19:41:27 +0000 (0:00:00.277) 0:05:38.517 *********
2025-06-05 19:46:57.271697 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271702 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-05 19:46:57.271713 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-05 19:46:57.271718 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271723 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-05 19:46:57.271727 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-05 19:46:57.271732 | orchestrator |
2025-06-05 19:46:57.271751 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-06-05 19:46:57.271757 | orchestrator | Thursday 05 June 2025 19:41:30 +0000 (0:00:02.290) 0:05:40.807 *********
2025-06-05 19:46:57.271762 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271767 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-05 19:46:57.271772 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-05 19:46:57.271777 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 19:46:57.271782 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-05 19:46:57.271787 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-05 19:46:57.271792 | orchestrator |
2025-06-05 19:46:57.271796 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-06-05 19:46:57.271801 | orchestrator | Thursday 05 June 2025 19:41:31 +0000 (0:00:01.365) 0:05:42.173 *********
2025-06-05 19:46:57.271806 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.271811 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.271816 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.271821 | orchestrator |
2025-06-05 19:46:57.271828 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-06-05 19:46:57.271833 | orchestrator | Thursday 05 June 2025 19:41:32 +0000 (0:00:00.646) 0:05:42.820 *********
2025-06-05 19:46:57.271838 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271843 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271848 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271853 | orchestrator |
2025-06-05 19:46:57.271857 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-06-05 19:46:57.271862 | orchestrator | Thursday 05 June 2025 19:41:32 +0000 (0:00:00.247) 0:05:43.101 *********
2025-06-05 19:46:57.271867 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.271872 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.271877 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.271882 | orchestrator |
2025-06-05 19:46:57.271886 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-06-05 19:46:57.271891 | orchestrator | Thursday 05 June 2025 19:41:32 +0000 (0:00:00.247) 0:05:43.349
********* 2025-06-05 19:46:57.271896 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.271901 | orchestrator | 2025-06-05 19:46:57.271906 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-05 19:46:57.271911 | orchestrator | Thursday 05 June 2025 19:41:33 +0000 (0:00:00.599) 0:05:43.949 ********* 2025-06-05 19:46:57.271916 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.271921 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.271925 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.271930 | orchestrator | 2025-06-05 19:46:57.271935 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-05 19:46:57.271940 | orchestrator | Thursday 05 June 2025 19:41:33 +0000 (0:00:00.261) 0:05:44.211 ********* 2025-06-05 19:46:57.271945 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.271949 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:46:57.271954 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:46:57.271959 | orchestrator | 2025-06-05 19:46:57.271964 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-05 19:46:57.271969 | orchestrator | Thursday 05 June 2025 19:41:33 +0000 (0:00:00.260) 0:05:44.472 ********* 2025-06-05 19:46:57.271974 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.271982 | orchestrator | 2025-06-05 19:46:57.271987 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-05 19:46:57.272009 | orchestrator | Thursday 05 June 2025 19:41:34 +0000 (0:00:00.631) 0:05:45.103 ********* 2025-06-05 19:46:57.272015 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.272020 | orchestrator | changed: 
[testbed-node-0] 2025-06-05 19:46:57.272025 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.272030 | orchestrator | 2025-06-05 19:46:57.272035 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-05 19:46:57.272040 | orchestrator | Thursday 05 June 2025 19:41:35 +0000 (0:00:01.156) 0:05:46.259 ********* 2025-06-05 19:46:57.272045 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.272049 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.272054 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.272059 | orchestrator | 2025-06-05 19:46:57.272064 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-05 19:46:57.272069 | orchestrator | Thursday 05 June 2025 19:41:36 +0000 (0:00:01.125) 0:05:47.384 ********* 2025-06-05 19:46:57.272073 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.272078 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.272083 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.272088 | orchestrator | 2025-06-05 19:46:57.272093 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-05 19:46:57.272097 | orchestrator | Thursday 05 June 2025 19:41:38 +0000 (0:00:02.112) 0:05:49.497 ********* 2025-06-05 19:46:57.272102 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.272107 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.272112 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.272117 | orchestrator | 2025-06-05 19:46:57.272122 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-05 19:46:57.272127 | orchestrator | Thursday 05 June 2025 19:41:41 +0000 (0:00:02.244) 0:05:51.742 ********* 2025-06-05 19:46:57.272131 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.272136 | orchestrator | skipping: 
[testbed-node-1] 2025-06-05 19:46:57.272141 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-05 19:46:57.272146 | orchestrator | 2025-06-05 19:46:57.272151 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-05 19:46:57.272156 | orchestrator | Thursday 05 June 2025 19:41:41 +0000 (0:00:00.382) 0:05:52.124 ********* 2025-06-05 19:46:57.272161 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-05 19:46:57.272183 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-05 19:46:57.272189 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-05 19:46:57.272194 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-05 19:46:57.272199 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-06-05 19:46:57.272203 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2025-06-05 19:46:57.272208 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-05 19:46:57.272213 | orchestrator | 2025-06-05 19:46:57.272218 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-05 19:46:57.272226 | orchestrator | Thursday 05 June 2025 19:42:18 +0000 (0:00:36.622) 0:06:28.747 ********* 2025-06-05 19:46:57.272230 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-05 19:46:57.272235 | orchestrator | 2025-06-05 19:46:57.272240 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-05 19:46:57.272245 | orchestrator | Thursday 05 June 2025 19:42:19 +0000 (0:00:01.435) 0:06:30.183 ********* 2025-06-05 19:46:57.272254 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.272259 | orchestrator | 2025-06-05 19:46:57.272264 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-05 19:46:57.272269 | orchestrator | Thursday 05 June 2025 19:42:20 +0000 (0:00:00.558) 0:06:30.741 ********* 2025-06-05 19:46:57.272274 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.272278 | orchestrator | 2025-06-05 19:46:57.272295 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-05 19:46:57.272300 | orchestrator | Thursday 05 June 2025 19:42:20 +0000 (0:00:00.115) 0:06:30.857 ********* 2025-06-05 19:46:57.272305 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-05 19:46:57.272310 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-05 19:46:57.272314 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-05 19:46:57.272319 | orchestrator | 2025-06-05 19:46:57.272324 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-06-05 19:46:57.272329 | orchestrator | Thursday 05 June 2025 19:42:26 +0000 (0:00:06.363) 0:06:37.221 ********* 2025-06-05 19:46:57.272334 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-05 19:46:57.272339 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-05 19:46:57.272343 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-05 19:46:57.272348 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-05 19:46:57.272353 | orchestrator | 2025-06-05 19:46:57.272358 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-05 19:46:57.272363 | orchestrator | Thursday 05 June 2025 19:42:31 +0000 (0:00:04.859) 0:06:42.081 ********* 2025-06-05 19:46:57.272368 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.272373 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.272378 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.272382 | orchestrator | 2025-06-05 19:46:57.272387 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-05 19:46:57.272392 | orchestrator | Thursday 05 June 2025 19:42:32 +0000 (0:00:00.941) 0:06:43.022 ********* 2025-06-05 19:46:57.272397 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:46:57.272402 | orchestrator | 2025-06-05 19:46:57.272407 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-05 19:46:57.272412 | orchestrator | Thursday 05 June 2025 19:42:32 +0000 (0:00:00.482) 0:06:43.505 ********* 2025-06-05 19:46:57.272417 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.272421 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.272426 | orchestrator | ok: 
[testbed-node-2] 2025-06-05 19:46:57.272431 | orchestrator | 2025-06-05 19:46:57.272436 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-05 19:46:57.272441 | orchestrator | Thursday 05 June 2025 19:42:33 +0000 (0:00:00.296) 0:06:43.801 ********* 2025-06-05 19:46:57.272446 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:46:57.272451 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:46:57.272456 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:46:57.272460 | orchestrator | 2025-06-05 19:46:57.272465 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-05 19:46:57.272470 | orchestrator | Thursday 05 June 2025 19:42:35 +0000 (0:00:01.791) 0:06:45.593 ********* 2025-06-05 19:46:57.272475 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-05 19:46:57.272480 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-05 19:46:57.272485 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-05 19:46:57.272490 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:46:57.272495 | orchestrator | 2025-06-05 19:46:57.272499 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-05 19:46:57.272508 | orchestrator | Thursday 05 June 2025 19:42:35 +0000 (0:00:00.599) 0:06:46.192 ********* 2025-06-05 19:46:57.272513 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.272518 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.272523 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.272527 | orchestrator | 2025-06-05 19:46:57.272532 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-05 19:46:57.272537 | orchestrator | 2025-06-05 19:46:57.272542 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-05 
19:46:57.272547 | orchestrator | Thursday 05 June 2025 19:42:36 +0000 (0:00:00.532) 0:06:46.725 ********* 2025-06-05 19:46:57.272568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.272574 | orchestrator | 2025-06-05 19:46:57.272579 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-05 19:46:57.272584 | orchestrator | Thursday 05 June 2025 19:42:36 +0000 (0:00:00.678) 0:06:47.403 ********* 2025-06-05 19:46:57.272589 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.272594 | orchestrator | 2025-06-05 19:46:57.272599 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-05 19:46:57.272604 | orchestrator | Thursday 05 June 2025 19:42:37 +0000 (0:00:00.489) 0:06:47.893 ********* 2025-06-05 19:46:57.272608 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.272613 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.272618 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.272623 | orchestrator | 2025-06-05 19:46:57.272630 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-05 19:46:57.272635 | orchestrator | Thursday 05 June 2025 19:42:37 +0000 (0:00:00.269) 0:06:48.163 ********* 2025-06-05 19:46:57.272640 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272645 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272650 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272655 | orchestrator | 2025-06-05 19:46:57.272660 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-05 19:46:57.272664 | orchestrator | Thursday 05 June 2025 19:42:38 +0000 (0:00:00.974) 0:06:49.138 ********* 
2025-06-05 19:46:57.272669 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272674 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272679 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272683 | orchestrator | 2025-06-05 19:46:57.272688 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-05 19:46:57.272693 | orchestrator | Thursday 05 June 2025 19:42:39 +0000 (0:00:00.698) 0:06:49.837 ********* 2025-06-05 19:46:57.272698 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272703 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272707 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272712 | orchestrator | 2025-06-05 19:46:57.272717 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-05 19:46:57.272722 | orchestrator | Thursday 05 June 2025 19:42:40 +0000 (0:00:00.724) 0:06:50.561 ********* 2025-06-05 19:46:57.272727 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.272732 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.272737 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.272742 | orchestrator | 2025-06-05 19:46:57.272746 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-05 19:46:57.272751 | orchestrator | Thursday 05 June 2025 19:42:40 +0000 (0:00:00.274) 0:06:50.835 ********* 2025-06-05 19:46:57.272756 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.272761 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.272766 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.272770 | orchestrator | 2025-06-05 19:46:57.272775 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-05 19:46:57.272780 | orchestrator | Thursday 05 June 2025 19:42:40 +0000 (0:00:00.571) 0:06:51.407 ********* 2025-06-05 19:46:57.272788 | 
orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.272793 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.272798 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.272803 | orchestrator | 2025-06-05 19:46:57.272808 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-05 19:46:57.272812 | orchestrator | Thursday 05 June 2025 19:42:41 +0000 (0:00:00.287) 0:06:51.695 ********* 2025-06-05 19:46:57.272817 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272822 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272827 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272832 | orchestrator | 2025-06-05 19:46:57.272837 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-05 19:46:57.272841 | orchestrator | Thursday 05 June 2025 19:42:41 +0000 (0:00:00.745) 0:06:52.440 ********* 2025-06-05 19:46:57.272846 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272851 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272856 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272861 | orchestrator | 2025-06-05 19:46:57.272866 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-05 19:46:57.272870 | orchestrator | Thursday 05 June 2025 19:42:42 +0000 (0:00:00.731) 0:06:53.172 ********* 2025-06-05 19:46:57.272875 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.272880 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.272885 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.272889 | orchestrator | 2025-06-05 19:46:57.272894 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-05 19:46:57.272899 | orchestrator | Thursday 05 June 2025 19:42:43 +0000 (0:00:00.519) 0:06:53.691 ********* 2025-06-05 19:46:57.272904 | orchestrator | skipping: 
[testbed-node-3] 2025-06-05 19:46:57.272909 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.272914 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.272918 | orchestrator | 2025-06-05 19:46:57.272923 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-05 19:46:57.272928 | orchestrator | Thursday 05 June 2025 19:42:43 +0000 (0:00:00.284) 0:06:53.976 ********* 2025-06-05 19:46:57.272933 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272937 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272942 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272947 | orchestrator | 2025-06-05 19:46:57.272952 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-05 19:46:57.272957 | orchestrator | Thursday 05 June 2025 19:42:43 +0000 (0:00:00.320) 0:06:54.296 ********* 2025-06-05 19:46:57.272962 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.272966 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.272971 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.272976 | orchestrator | 2025-06-05 19:46:57.272981 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-05 19:46:57.272986 | orchestrator | Thursday 05 June 2025 19:42:44 +0000 (0:00:00.384) 0:06:54.681 ********* 2025-06-05 19:46:57.273004 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.273009 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.273016 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.273021 | orchestrator | 2025-06-05 19:46:57.273026 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-05 19:46:57.273031 | orchestrator | Thursday 05 June 2025 19:42:44 +0000 (0:00:00.609) 0:06:55.291 ********* 2025-06-05 19:46:57.273036 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.273041 | 
orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.273045 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.273050 | orchestrator | 2025-06-05 19:46:57.273055 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-05 19:46:57.273060 | orchestrator | Thursday 05 June 2025 19:42:45 +0000 (0:00:00.324) 0:06:55.615 ********* 2025-06-05 19:46:57.273065 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.273073 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.273078 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.273083 | orchestrator | 2025-06-05 19:46:57.273087 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-05 19:46:57.273096 | orchestrator | Thursday 05 June 2025 19:42:45 +0000 (0:00:00.286) 0:06:55.902 ********* 2025-06-05 19:46:57.273101 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.273106 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.273110 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.273115 | orchestrator | 2025-06-05 19:46:57.273120 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-05 19:46:57.273125 | orchestrator | Thursday 05 June 2025 19:42:45 +0000 (0:00:00.306) 0:06:56.208 ********* 2025-06-05 19:46:57.273129 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.273134 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.273139 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.273144 | orchestrator | 2025-06-05 19:46:57.273149 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-05 19:46:57.273154 | orchestrator | Thursday 05 June 2025 19:42:46 +0000 (0:00:00.540) 0:06:56.749 ********* 2025-06-05 19:46:57.273158 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.273163 | orchestrator | ok: 
[testbed-node-4] 2025-06-05 19:46:57.273168 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.273173 | orchestrator | 2025-06-05 19:46:57.273177 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-05 19:46:57.273182 | orchestrator | Thursday 05 June 2025 19:42:46 +0000 (0:00:00.513) 0:06:57.262 ********* 2025-06-05 19:46:57.273187 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.273192 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.273196 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.273201 | orchestrator | 2025-06-05 19:46:57.273206 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-05 19:46:57.273211 | orchestrator | Thursday 05 June 2025 19:42:47 +0000 (0:00:00.305) 0:06:57.567 ********* 2025-06-05 19:46:57.273216 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-05 19:46:57.273221 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-05 19:46:57.273226 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-05 19:46:57.273231 | orchestrator | 2025-06-05 19:46:57.273235 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-05 19:46:57.273240 | orchestrator | Thursday 05 June 2025 19:42:47 +0000 (0:00:00.827) 0:06:58.395 ********* 2025-06-05 19:46:57.273245 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.273250 | orchestrator | 2025-06-05 19:46:57.273255 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-05 19:46:57.273260 | orchestrator | Thursday 05 June 2025 19:42:48 +0000 (0:00:00.766) 0:06:59.162 ********* 2025-06-05 19:46:57.273265 | orchestrator | skipping: 
[testbed-node-3] 2025-06-05 19:46:57.273270 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.273274 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.273279 | orchestrator | 2025-06-05 19:46:57.273284 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-05 19:46:57.273289 | orchestrator | Thursday 05 June 2025 19:42:48 +0000 (0:00:00.306) 0:06:59.469 ********* 2025-06-05 19:46:57.273294 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.273299 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.273304 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.273308 | orchestrator | 2025-06-05 19:46:57.273313 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-05 19:46:57.273318 | orchestrator | Thursday 05 June 2025 19:42:49 +0000 (0:00:00.291) 0:06:59.760 ********* 2025-06-05 19:46:57.273323 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.273331 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.273336 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.273341 | orchestrator | 2025-06-05 19:46:57.273346 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-05 19:46:57.273351 | orchestrator | Thursday 05 June 2025 19:42:50 +0000 (0:00:00.837) 0:07:00.597 ********* 2025-06-05 19:46:57.273356 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.273360 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.273365 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.273370 | orchestrator | 2025-06-05 19:46:57.273375 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-05 19:46:57.273380 | orchestrator | Thursday 05 June 2025 19:42:50 +0000 (0:00:00.269) 0:07:00.866 ********* 2025-06-05 19:46:57.273385 | orchestrator | changed: [testbed-node-3] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-05 19:46:57.273389 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-05 19:46:57.273394 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-05 19:46:57.273399 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-05 19:46:57.273407 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-05 19:46:57.273412 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-05 19:46:57.273417 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-05 19:46:57.273422 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-05 19:46:57.273427 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-05 19:46:57.273432 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-05 19:46:57.273437 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-05 19:46:57.273441 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-05 19:46:57.273449 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-05 19:46:57.273454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-05 19:46:57.273459 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-05 19:46:57.273464 | orchestrator | 2025-06-05 19:46:57.273469 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2025-06-05 19:46:57.273474 | orchestrator | Thursday 05 June 2025 19:42:54 +0000 (0:00:04.026) 0:07:04.893 *********
2025-06-05 19:46:57.273479 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.273484 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.273488 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.273493 | orchestrator |
2025-06-05 19:46:57.273498 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-06-05 19:46:57.273503 | orchestrator | Thursday 05 June 2025 19:42:54 +0000 (0:00:00.238) 0:07:05.132 *********
2025-06-05 19:46:57.273508 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.273513 | orchestrator |
2025-06-05 19:46:57.273518 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-06-05 19:46:57.273522 | orchestrator | Thursday 05 June 2025 19:42:55 +0000 (0:00:00.580) 0:07:05.712 *********
2025-06-05 19:46:57.273527 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-05 19:46:57.273532 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-05 19:46:57.273537 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-06-05 19:46:57.273545 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-06-05 19:46:57.273550 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-06-05 19:46:57.273555 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-06-05 19:46:57.273560 | orchestrator |
2025-06-05 19:46:57.273565 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-06-05 19:46:57.273570 | orchestrator | Thursday 05 June 2025 19:42:56 +0000 (0:00:00.984) 0:07:06.696 *********
2025-06-05 19:46:57.273575 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-05 19:46:57.273580 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-05 19:46:57.273585 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-05 19:46:57.273589 | orchestrator |
2025-06-05 19:46:57.273594 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-06-05 19:46:57.273599 | orchestrator | Thursday 05 June 2025 19:42:58 +0000 (0:00:02.193) 0:07:08.890 *********
2025-06-05 19:46:57.273604 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-05 19:46:57.273609 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-06-05 19:46:57.273613 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.273618 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-05 19:46:57.273623 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-06-05 19:46:57.273628 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.273633 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-05 19:46:57.273638 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-06-05 19:46:57.273643 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.273647 | orchestrator |
2025-06-05 19:46:57.273652 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-06-05 19:46:57.273657 | orchestrator | Thursday 05 June 2025 19:42:59 +0000 (0:00:01.305) 0:07:10.195 *********
2025-06-05 19:46:57.273662 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-05 19:46:57.273667 | orchestrator |
2025-06-05 19:46:57.273672 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-06-05 19:46:57.273676 | orchestrator | Thursday 05 June 2025 19:43:01 +0000 (0:00:02.210) 0:07:12.406 *********
2025-06-05 19:46:57.273681 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.273686 | orchestrator |
2025-06-05 19:46:57.273691 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-06-05 19:46:57.273696 | orchestrator | Thursday 05 June 2025 19:43:02 +0000 (0:00:00.439) 0:07:12.846 *********
2025-06-05 19:46:57.273701 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f5969faa-081d-5d9e-9303-7a3301cb4b7a', 'data_vg': 'ceph-f5969faa-081d-5d9e-9303-7a3301cb4b7a'})
2025-06-05 19:46:57.273707 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8d24cd11-dfc5-563c-af80-3beb61f8ef58', 'data_vg': 'ceph-8d24cd11-dfc5-563c-af80-3beb61f8ef58'})
2025-06-05 19:46:57.273714 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-9f7f7c2a-d649-5a85-84b6-7657bf908d98', 'data_vg': 'ceph-9f7f7c2a-d649-5a85-84b6-7657bf908d98'})
2025-06-05 19:46:57.273720 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-46c2c746-0272-5326-baff-0a3e04c6e4bf', 'data_vg': 'ceph-46c2c746-0272-5326-baff-0a3e04c6e4bf'})
2025-06-05 19:46:57.273725 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-67c48ddb-095b-5044-89f7-89f2250f1a91', 'data_vg': 'ceph-67c48ddb-095b-5044-89f7-89f2250f1a91'})
2025-06-05 19:46:57.273730 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-afd5871a-1fd2-5e8b-989c-517ad42902e5', 'data_vg': 'ceph-afd5871a-1fd2-5e8b-989c-517ad42902e5'})
2025-06-05 19:46:57.273734 | orchestrator |
2025-06-05 19:46:57.273739 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-06-05 19:46:57.273747 | orchestrator | Thursday 05 June 2025 19:43:39 +0000 (0:00:37.204) 0:07:50.050 *********
2025-06-05 19:46:57.273755 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.273760 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.273765 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.273770 | orchestrator |
2025-06-05 19:46:57.273775 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-06-05 19:46:57.273780 | orchestrator | Thursday 05 June 2025 19:43:40 +0000 (0:00:00.524) 0:07:50.575 *********
2025-06-05 19:46:57.273785 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.273789 | orchestrator |
2025-06-05 19:46:57.273794 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-06-05 19:46:57.273799 | orchestrator | Thursday 05 June 2025 19:43:40 +0000 (0:00:00.516) 0:07:51.091 *********
2025-06-05 19:46:57.273804 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.273809 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.273814 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.273819 | orchestrator |
2025-06-05 19:46:57.273824 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-06-05 19:46:57.273829 | orchestrator | Thursday 05 June 2025 19:43:41 +0000 (0:00:00.612) 0:07:51.704 *********
2025-06-05 19:46:57.273834 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.273839 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.273844 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.273848 | orchestrator |
2025-06-05 19:46:57.273853 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-06-05 19:46:57.273858 | orchestrator | Thursday 05 June 2025 19:43:44 +0000 (0:00:02.922) 0:07:54.626 *********
2025-06-05 19:46:57.273863 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.273868 | orchestrator |
2025-06-05 19:46:57.273873 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-06-05 19:46:57.273878 | orchestrator | Thursday 05 June 2025 19:43:44 +0000 (0:00:00.499) 0:07:55.125 *********
2025-06-05 19:46:57.273883 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.273888 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.273893 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.273897 | orchestrator |
2025-06-05 19:46:57.273902 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-06-05 19:46:57.273907 | orchestrator | Thursday 05 June 2025 19:43:45 +0000 (0:00:01.085) 0:07:56.211 *********
2025-06-05 19:46:57.273912 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.273917 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.273922 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.273927 | orchestrator |
2025-06-05 19:46:57.273931 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-06-05 19:46:57.273936 | orchestrator | Thursday 05 June 2025 19:43:46 +0000 (0:00:01.315) 0:07:57.526 *********
2025-06-05 19:46:57.273941 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.273946 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.273951 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.273956 | orchestrator |
2025-06-05 19:46:57.273961 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-06-05 19:46:57.273966 | orchestrator | Thursday 05 June 2025 19:43:48 +0000 (0:00:01.814) 0:07:59.341 *********
2025-06-05 19:46:57.273971 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.273975 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.273980 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.273985 | orchestrator |
2025-06-05 19:46:57.274002 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-06-05 19:46:57.274007 | orchestrator | Thursday 05 June 2025 19:43:49 +0000 (0:00:00.290) 0:07:59.632 *********
2025-06-05 19:46:57.274012 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274040 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274045 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274055 | orchestrator |
2025-06-05 19:46:57.274060 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-06-05 19:46:57.274065 | orchestrator | Thursday 05 June 2025 19:43:49 +0000 (0:00:00.312) 0:07:59.944 *********
2025-06-05 19:46:57.274069 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-06-05 19:46:57.274074 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-06-05 19:46:57.274079 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-06-05 19:46:57.274084 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-06-05 19:46:57.274089 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-06-05 19:46:57.274094 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-06-05 19:46:57.274098 | orchestrator |
2025-06-05 19:46:57.274103 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-06-05 19:46:57.274108 | orchestrator | Thursday 05 June 2025 19:43:50 +0000 (0:00:01.258) 0:08:01.203 *********
2025-06-05 19:46:57.274113 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-06-05 19:46:57.274118 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-06-05 19:46:57.274123 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-06-05 19:46:57.274128 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-06-05 19:46:57.274135 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-06-05 19:46:57.274140 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-06-05 19:46:57.274145 | orchestrator |
2025-06-05 19:46:57.274150 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-06-05 19:46:57.274155 | orchestrator | Thursday 05 June 2025 19:43:52 +0000 (0:00:02.118) 0:08:03.321 *********
2025-06-05 19:46:57.274160 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-06-05 19:46:57.274165 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-06-05 19:46:57.274169 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-06-05 19:46:57.274174 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-06-05 19:46:57.274179 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-06-05 19:46:57.274184 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-06-05 19:46:57.274189 | orchestrator |
2025-06-05 19:46:57.274193 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-06-05 19:46:57.274198 | orchestrator | Thursday 05 June 2025 19:43:56 +0000 (0:00:03.499) 0:08:06.821 *********
2025-06-05 19:46:57.274206 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274211 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274216 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-06-05 19:46:57.274221 | orchestrator |
2025-06-05 19:46:57.274226 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-06-05 19:46:57.274230 | orchestrator | Thursday 05 June 2025 19:43:59 +0000 (0:00:02.726) 0:08:09.547 *********
2025-06-05 19:46:57.274235 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274240 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274245 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-06-05 19:46:57.274250 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-06-05 19:46:57.274254 | orchestrator |
2025-06-05 19:46:57.274259 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-06-05 19:46:57.274264 | orchestrator | Thursday 05 June 2025 19:44:12 +0000 (0:00:13.259) 0:08:22.807 *********
2025-06-05 19:46:57.274269 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274274 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274278 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274283 | orchestrator |
2025-06-05 19:46:57.274288 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-06-05 19:46:57.274293 | orchestrator | Thursday 05 June 2025 19:44:13 +0000 (0:00:00.873) 0:08:23.680 *********
2025-06-05 19:46:57.274298 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274303 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274311 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274316 | orchestrator |
2025-06-05 19:46:57.274320 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-06-05 19:46:57.274325 | orchestrator | Thursday 05 June 2025 19:44:13 +0000 (0:00:00.759) 0:08:24.440 *********
2025-06-05 19:46:57.274330 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:46:57.274335 | orchestrator |
2025-06-05 19:46:57.274340 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-06-05 19:46:57.274345 | orchestrator | Thursday 05 June 2025 19:44:14 +0000 (0:00:00.567) 0:08:25.008 *********
2025-06-05 19:46:57.274350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.274355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.274359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.274364 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274369 | orchestrator |
2025-06-05 19:46:57.274374 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-06-05 19:46:57.274379 | orchestrator | Thursday 05 June 2025 19:44:14 +0000 (0:00:00.401) 0:08:25.410 *********
2025-06-05 19:46:57.274383 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274388 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274393 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274398 | orchestrator |
2025-06-05 19:46:57.274403 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-06-05 19:46:57.274408 | orchestrator | Thursday 05 June 2025 19:44:15 +0000 (0:00:00.219) 0:08:25.730 *********
2025-06-05 19:46:57.274412 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274417 | orchestrator |
2025-06-05 19:46:57.274422 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-06-05 19:46:57.274427 | orchestrator | Thursday 05 June 2025 19:44:15 +0000 (0:00:00.219) 0:08:25.949 *********
2025-06-05 19:46:57.274432 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274436 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274441 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274446 | orchestrator |
2025-06-05 19:46:57.274451 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-06-05 19:46:57.274456 | orchestrator | Thursday 05 June 2025 19:44:15 +0000 (0:00:00.577) 0:08:26.527 *********
2025-06-05 19:46:57.274460 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274465 | orchestrator |
2025-06-05 19:46:57.274470 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-06-05 19:46:57.274475 | orchestrator | Thursday 05 June 2025 19:44:16 +0000 (0:00:00.260) 0:08:26.788 *********
2025-06-05 19:46:57.274480 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274485 | orchestrator |
2025-06-05 19:46:57.274489 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-06-05 19:46:57.274494 | orchestrator | Thursday 05 June 2025 19:44:16 +0000 (0:00:00.214) 0:08:27.002 *********
2025-06-05 19:46:57.274499 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274504 | orchestrator |
2025-06-05 19:46:57.274509 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-06-05 19:46:57.274513 | orchestrator | Thursday 05 June 2025 19:44:16 +0000 (0:00:00.110) 0:08:27.113 *********
2025-06-05 19:46:57.274518 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274523 | orchestrator |
2025-06-05 19:46:57.274530 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-06-05 19:46:57.274536 | orchestrator | Thursday 05 June 2025 19:44:16 +0000 (0:00:00.200) 0:08:27.313 *********
2025-06-05 19:46:57.274540 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274545 | orchestrator |
2025-06-05 19:46:57.274550 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-06-05 19:46:57.274555 | orchestrator | Thursday 05 June 2025 19:44:16 +0000 (0:00:00.212) 0:08:27.526 *********
2025-06-05 19:46:57.274563 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-05 19:46:57.274568 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-05 19:46:57.274573 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-05 19:46:57.274578 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274583 | orchestrator |
2025-06-05 19:46:57.274588 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-06-05 19:46:57.274596 | orchestrator | Thursday 05 June 2025 19:44:17 +0000 (0:00:00.354) 0:08:27.880 *********
2025-06-05 19:46:57.274601 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274606 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274610 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274615 | orchestrator |
2025-06-05 19:46:57.274620 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-06-05 19:46:57.274625 | orchestrator | Thursday 05 June 2025 19:44:17 +0000 (0:00:00.291) 0:08:28.172 *********
2025-06-05 19:46:57.274630 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274635 | orchestrator |
2025-06-05 19:46:57.274640 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-06-05 19:46:57.274645 | orchestrator | Thursday 05 June 2025 19:44:18 +0000 (0:00:00.768) 0:08:28.941 *********
2025-06-05 19:46:57.274649 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274654 | orchestrator |
2025-06-05 19:46:57.274659 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-06-05 19:46:57.274664 | orchestrator |
2025-06-05 19:46:57.274669 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-06-05 19:46:57.274674 | orchestrator | Thursday 05 June 2025 19:44:19 +0000 (0:00:00.629) 0:08:29.571 *********
2025-06-05 19:46:57.274679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.274684 | orchestrator |
2025-06-05 19:46:57.274689 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-06-05 19:46:57.274694 | orchestrator | Thursday 05 June 2025 19:44:20 +0000 (0:00:01.188) 0:08:30.759 *********
2025-06-05 19:46:57.274699 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.274704 | orchestrator |
2025-06-05 19:46:57.274709 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-06-05 19:46:57.274714 | orchestrator | Thursday 05 June 2025 19:44:21 +0000 (0:00:01.177) 0:08:31.937 *********
2025-06-05 19:46:57.274719 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274724 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274729 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274734 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.274739 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.274743 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.274748 | orchestrator |
2025-06-05 19:46:57.274753 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-06-05 19:46:57.274758 | orchestrator | Thursday 05 June 2025 19:44:22 +0000 (0:00:01.209) 0:08:33.146 *********
2025-06-05 19:46:57.274763 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.274768 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.274772 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.274777 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.274782 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.274787 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.274792 | orchestrator |
2025-06-05 19:46:57.274797 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-06-05 19:46:57.274802 | orchestrator | Thursday 05 June 2025 19:44:23 +0000 (0:00:00.731) 0:08:33.878 *********
2025-06-05 19:46:57.274806 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.274815 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.274819 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.274824 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.274829 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.274834 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.274839 | orchestrator |
2025-06-05 19:46:57.274844 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-06-05 19:46:57.274849 | orchestrator | Thursday 05 June 2025 19:44:24 +0000 (0:00:00.844) 0:08:34.723 *********
2025-06-05 19:46:57.274854 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.274858 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.274863 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.274868 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.274873 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.274878 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.274883 | orchestrator |
2025-06-05 19:46:57.274887 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-06-05 19:46:57.274892 | orchestrator | Thursday 05 June 2025 19:44:24 +0000 (0:00:00.727) 0:08:35.450 *********
2025-06-05 19:46:57.274897 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274902 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274907 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274912 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.274916 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.274921 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.274926 | orchestrator |
2025-06-05 19:46:57.274931 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-06-05 19:46:57.274936 | orchestrator | Thursday 05 June 2025 19:44:26 +0000 (0:00:01.235) 0:08:36.685 *********
2025-06-05 19:46:57.274941 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.274945 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.274953 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.274958 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.274963 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.274968 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.274972 | orchestrator |
2025-06-05 19:46:57.274977 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-06-05 19:46:57.274982 | orchestrator | Thursday 05 June 2025 19:44:26 +0000 (0:00:00.616) 0:08:37.302 *********
2025-06-05 19:46:57.274987 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.275021 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.275027 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.275032 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275036 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275041 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275046 | orchestrator |
2025-06-05 19:46:57.275051 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-06-05 19:46:57.275056 | orchestrator | Thursday 05 June 2025 19:44:27 +0000 (0:00:00.790) 0:08:38.092 *********
2025-06-05 19:46:57.275061 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275066 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275071 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275075 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275080 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275085 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275090 | orchestrator |
2025-06-05 19:46:57.275095 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-06-05 19:46:57.275099 | orchestrator | Thursday 05 June 2025 19:44:28 +0000 (0:00:01.056) 0:08:39.149 *********
2025-06-05 19:46:57.275104 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275109 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275114 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275119 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275124 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275128 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275137 | orchestrator |
2025-06-05 19:46:57.275142 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-06-05 19:46:57.275146 | orchestrator | Thursday 05 June 2025 19:44:29 +0000 (0:00:01.217) 0:08:40.366 *********
2025-06-05 19:46:57.275151 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.275155 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.275161 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.275165 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275170 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275174 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275179 | orchestrator |
2025-06-05 19:46:57.275183 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-06-05 19:46:57.275188 | orchestrator | Thursday 05 June 2025 19:44:30 +0000 (0:00:00.492) 0:08:40.858 *********
2025-06-05 19:46:57.275193 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.275197 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.275202 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.275206 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275211 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275215 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275220 | orchestrator |
2025-06-05 19:46:57.275225 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-06-05 19:46:57.275229 | orchestrator | Thursday 05 June 2025 19:44:30 +0000 (0:00:00.637) 0:08:41.496 *********
2025-06-05 19:46:57.275234 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275238 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275243 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275248 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275252 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275257 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275261 | orchestrator |
2025-06-05 19:46:57.275266 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-05 19:46:57.275270 | orchestrator | Thursday 05 June 2025 19:44:31 +0000 (0:00:00.486) 0:08:41.983 *********
2025-06-05 19:46:57.275275 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275280 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275284 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275289 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275293 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275298 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275303 | orchestrator |
2025-06-05 19:46:57.275307 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-05 19:46:57.275312 | orchestrator | Thursday 05 June 2025 19:44:32 +0000 (0:00:00.693) 0:08:42.677 *********
2025-06-05 19:46:57.275316 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275321 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275325 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275330 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275335 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275364 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275369 | orchestrator |
2025-06-05 19:46:57.275374 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-05 19:46:57.275379 | orchestrator | Thursday 05 June 2025 19:44:32 +0000 (0:00:00.616) 0:08:43.293 *********
2025-06-05 19:46:57.275383 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.275388 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.275392 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.275397 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275401 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275406 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275410 | orchestrator |
2025-06-05 19:46:57.275415 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-05 19:46:57.275419 | orchestrator | Thursday 05 June 2025 19:44:33 +0000 (0:00:00.890) 0:08:44.183 *********
2025-06-05 19:46:57.275427 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.275432 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.275437 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.275441 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:46:57.275446 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:46:57.275450 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:46:57.275455 | orchestrator |
2025-06-05 19:46:57.275459 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-05 19:46:57.275464 | orchestrator | Thursday 05 June 2025 19:44:34 +0000 (0:00:00.678) 0:08:44.862 *********
2025-06-05 19:46:57.275471 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:46:57.275476 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:46:57.275480 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:46:57.275485 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275489 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275494 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275498 | orchestrator |
2025-06-05 19:46:57.275503 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-05 19:46:57.275508 | orchestrator | Thursday 05 June 2025 19:44:34 +0000 (0:00:00.672) 0:08:45.535 *********
2025-06-05 19:46:57.275512 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275517 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275521 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275526 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275530 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275535 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275539 | orchestrator |
2025-06-05 19:46:57.275544 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-05 19:46:57.275551 | orchestrator | Thursday 05 June 2025 19:44:35 +0000 (0:00:00.568) 0:08:46.103 *********
2025-06-05 19:46:57.275556 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275560 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275565 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275569 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275573 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275578 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275582 | orchestrator |
2025-06-05 19:46:57.275587 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-06-05 19:46:57.275592 | orchestrator | Thursday 05 June 2025 19:44:36 +0000 (0:00:00.956) 0:08:47.059 *********
2025-06-05 19:46:57.275596 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-05 19:46:57.275601 | orchestrator |
2025-06-05 19:46:57.275605 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-06-05 19:46:57.275610 | orchestrator | Thursday 05 June 2025 19:44:40 +0000 (0:00:04.066) 0:08:51.126 *********
2025-06-05 19:46:57.275614 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-05 19:46:57.275619 | orchestrator |
2025-06-05 19:46:57.275623 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-06-05 19:46:57.275628 | orchestrator | Thursday 05 June 2025 19:44:42 +0000 (0:00:01.961) 0:08:53.087 *********
2025-06-05 19:46:57.275633 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.275637 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.275642 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.275646 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275651 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.275655 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.275660 | orchestrator |
2025-06-05 19:46:57.275664 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-06-05 19:46:57.275669 | orchestrator | Thursday 05 June 2025 19:44:44 +0000 (0:00:01.801) 0:08:54.889 *********
2025-06-05 19:46:57.275673 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.275678 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.275682 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.275687 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.275695 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.275699 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.275704 | orchestrator |
2025-06-05 19:46:57.275708 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-06-05 19:46:57.275713 | orchestrator | Thursday 05 June 2025 19:44:45 +0000 (0:00:01.052) 0:08:55.942 *********
2025-06-05 19:46:57.275717 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.275723 | orchestrator |
2025-06-05 19:46:57.275727 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-06-05 19:46:57.275732 | orchestrator | Thursday 05 June 2025 19:44:46 +0000 (0:00:01.032) 0:08:56.975 *********
2025-06-05 19:46:57.275736 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.275741 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.275745 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.275750 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.275754 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.275759 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.275763 | orchestrator |
2025-06-05 19:46:57.275768 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-06-05 19:46:57.275773 | orchestrator | Thursday 05 June 2025 19:44:48 +0000 (0:00:01.817) 0:08:58.793 *********
2025-06-05 19:46:57.275777 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.275782 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.275786 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.275791 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.275795 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.275800 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.275804 | orchestrator |
2025-06-05 19:46:57.275809 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-06-05 19:46:57.275813 | orchestrator | Thursday 05 June 2025 19:44:51 +0000 (0:00:03.077) 0:09:01.871 *********
2025-06-05 19:46:57.275818 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:46:57.275823 | orchestrator |
2025-06-05 19:46:57.275827 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-06-05 19:46:57.275832 | orchestrator | Thursday 05 June 2025 19:44:52 +0000 (0:00:00.947) 0:09:02.818 *********
2025-06-05 19:46:57.275873 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275878 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:46:57.275882 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:46:57.275887 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:46:57.275892 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:46:57.275896 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:46:57.275901 | orchestrator |
2025-06-05 19:46:57.275905 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-06-05 19:46:57.275910 | orchestrator | Thursday 05 June 2025 19:44:52 +0000 (0:00:00.628) 0:09:03.447 *********
2025-06-05 19:46:57.275918 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:46:57.275922 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:46:57.275927 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:46:57.275931 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:46:57.275936 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:46:57.275941 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:46:57.275945 | orchestrator |
2025-06-05 19:46:57.275950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-06-05 19:46:57.275954 | orchestrator | Thursday 05 June 2025 19:44:55 +0000 (0:00:02.528) 0:09:05.976 *********
2025-06-05 19:46:57.275959 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:46:57.275964 |
orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.275968 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.275973 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:46:57.275981 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:46:57.275986 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:46:57.275999 | orchestrator | 2025-06-05 19:46:57.276004 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-05 19:46:57.276009 | orchestrator | 2025-06-05 19:46:57.276016 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-05 19:46:57.276021 | orchestrator | Thursday 05 June 2025 19:44:56 +0000 (0:00:01.027) 0:09:07.004 ********* 2025-06-05 19:46:57.276026 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.276030 | orchestrator | 2025-06-05 19:46:57.276035 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-05 19:46:57.276040 | orchestrator | Thursday 05 June 2025 19:44:56 +0000 (0:00:00.479) 0:09:07.483 ********* 2025-06-05 19:46:57.276044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.276049 | orchestrator | 2025-06-05 19:46:57.276053 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-05 19:46:57.276058 | orchestrator | Thursday 05 June 2025 19:44:57 +0000 (0:00:00.700) 0:09:08.184 ********* 2025-06-05 19:46:57.276062 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276067 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276072 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276076 | orchestrator | 2025-06-05 19:46:57.276081 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2025-06-05 19:46:57.276085 | orchestrator | Thursday 05 June 2025 19:44:57 +0000 (0:00:00.308) 0:09:08.492 ********* 2025-06-05 19:46:57.276090 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276094 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276099 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276103 | orchestrator | 2025-06-05 19:46:57.276108 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-05 19:46:57.276112 | orchestrator | Thursday 05 June 2025 19:44:58 +0000 (0:00:00.652) 0:09:09.145 ********* 2025-06-05 19:46:57.276117 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276121 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276126 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276130 | orchestrator | 2025-06-05 19:46:57.276135 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-05 19:46:57.276140 | orchestrator | Thursday 05 June 2025 19:44:59 +0000 (0:00:00.967) 0:09:10.112 ********* 2025-06-05 19:46:57.276144 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276149 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276153 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276158 | orchestrator | 2025-06-05 19:46:57.276162 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-05 19:46:57.276167 | orchestrator | Thursday 05 June 2025 19:45:00 +0000 (0:00:00.698) 0:09:10.811 ********* 2025-06-05 19:46:57.276172 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276176 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276181 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276185 | orchestrator | 2025-06-05 19:46:57.276190 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-05 
19:46:57.276194 | orchestrator | Thursday 05 June 2025 19:45:00 +0000 (0:00:00.363) 0:09:11.175 ********* 2025-06-05 19:46:57.276199 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276203 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276208 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276212 | orchestrator | 2025-06-05 19:46:57.276217 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-05 19:46:57.276221 | orchestrator | Thursday 05 June 2025 19:45:00 +0000 (0:00:00.290) 0:09:11.465 ********* 2025-06-05 19:46:57.276226 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276234 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276238 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276243 | orchestrator | 2025-06-05 19:46:57.276248 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-05 19:46:57.276252 | orchestrator | Thursday 05 June 2025 19:45:01 +0000 (0:00:00.546) 0:09:12.011 ********* 2025-06-05 19:46:57.276257 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276261 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276266 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276270 | orchestrator | 2025-06-05 19:46:57.276275 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-05 19:46:57.276280 | orchestrator | Thursday 05 June 2025 19:45:02 +0000 (0:00:00.684) 0:09:12.696 ********* 2025-06-05 19:46:57.276284 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276289 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276293 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276298 | orchestrator | 2025-06-05 19:46:57.276302 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-05 19:46:57.276307 | orchestrator | 
Thursday 05 June 2025 19:45:02 +0000 (0:00:00.703) 0:09:13.399 ********* 2025-06-05 19:46:57.276311 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276316 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276320 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276325 | orchestrator | 2025-06-05 19:46:57.276330 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-05 19:46:57.276334 | orchestrator | Thursday 05 June 2025 19:45:03 +0000 (0:00:00.291) 0:09:13.691 ********* 2025-06-05 19:46:57.276342 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276347 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276351 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276356 | orchestrator | 2025-06-05 19:46:57.276361 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-05 19:46:57.276365 | orchestrator | Thursday 05 June 2025 19:45:03 +0000 (0:00:00.557) 0:09:14.249 ********* 2025-06-05 19:46:57.276370 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276374 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276379 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276383 | orchestrator | 2025-06-05 19:46:57.276388 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-05 19:46:57.276392 | orchestrator | Thursday 05 June 2025 19:45:04 +0000 (0:00:00.346) 0:09:14.595 ********* 2025-06-05 19:46:57.276397 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276401 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276406 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276410 | orchestrator | 2025-06-05 19:46:57.276415 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-05 19:46:57.276422 | orchestrator | Thursday 05 June 2025 19:45:04 +0000 
(0:00:00.404) 0:09:15.000 ********* 2025-06-05 19:46:57.276427 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276431 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276436 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276440 | orchestrator | 2025-06-05 19:46:57.276445 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-05 19:46:57.276450 | orchestrator | Thursday 05 June 2025 19:45:04 +0000 (0:00:00.372) 0:09:15.372 ********* 2025-06-05 19:46:57.276454 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276459 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276463 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276468 | orchestrator | 2025-06-05 19:46:57.276472 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-05 19:46:57.276477 | orchestrator | Thursday 05 June 2025 19:45:05 +0000 (0:00:00.564) 0:09:15.937 ********* 2025-06-05 19:46:57.276481 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276486 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276490 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276500 | orchestrator | 2025-06-05 19:46:57.276505 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-05 19:46:57.276509 | orchestrator | Thursday 05 June 2025 19:45:05 +0000 (0:00:00.314) 0:09:16.251 ********* 2025-06-05 19:46:57.276514 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276518 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276523 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276527 | orchestrator | 2025-06-05 19:46:57.276532 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-05 19:46:57.276537 | orchestrator | Thursday 05 June 2025 19:45:05 +0000 (0:00:00.293) 
0:09:16.545 ********* 2025-06-05 19:46:57.276541 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276546 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276550 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276555 | orchestrator | 2025-06-05 19:46:57.276559 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-05 19:46:57.276564 | orchestrator | Thursday 05 June 2025 19:45:06 +0000 (0:00:00.292) 0:09:16.838 ********* 2025-06-05 19:46:57.276569 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.276573 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.276578 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.276582 | orchestrator | 2025-06-05 19:46:57.276587 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-05 19:46:57.276591 | orchestrator | Thursday 05 June 2025 19:45:07 +0000 (0:00:00.759) 0:09:17.598 ********* 2025-06-05 19:46:57.276596 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276600 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276605 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-05 19:46:57.276610 | orchestrator | 2025-06-05 19:46:57.276614 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-05 19:46:57.276619 | orchestrator | Thursday 05 June 2025 19:45:07 +0000 (0:00:00.382) 0:09:17.981 ********* 2025-06-05 19:46:57.276623 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-05 19:46:57.276628 | orchestrator | 2025-06-05 19:46:57.276633 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-05 19:46:57.276637 | orchestrator | Thursday 05 June 2025 19:45:09 +0000 (0:00:02.267) 0:09:20.248 ********* 2025-06-05 19:46:57.276643 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-05 19:46:57.276649 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276654 | orchestrator | 2025-06-05 19:46:57.276659 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-05 19:46:57.276663 | orchestrator | Thursday 05 June 2025 19:45:09 +0000 (0:00:00.212) 0:09:20.461 ********* 2025-06-05 19:46:57.276669 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:46:57.276678 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:46:57.276683 | orchestrator | 2025-06-05 19:46:57.276688 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-05 19:46:57.276695 | orchestrator | Thursday 05 June 2025 19:45:18 +0000 (0:00:08.548) 0:09:29.009 ********* 2025-06-05 19:46:57.276699 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-05 19:46:57.276704 | orchestrator | 2025-06-05 19:46:57.276709 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-05 19:46:57.276717 | orchestrator | Thursday 05 June 2025 19:45:22 +0000 (0:00:03.723) 0:09:32.733 ********* 2025-06-05 19:46:57.276721 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-05 19:46:57.276726 | orchestrator | 2025-06-05 19:46:57.276730 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-05 19:46:57.276735 | orchestrator | Thursday 05 June 2025 19:45:22 +0000 (0:00:00.526) 0:09:33.259 ********* 2025-06-05 19:46:57.276740 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-05 19:46:57.276744 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-05 19:46:57.276752 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-05 19:46:57.276756 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-05 19:46:57.276761 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-05 19:46:57.276765 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-05 19:46:57.276770 | orchestrator | 2025-06-05 19:46:57.276774 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-05 19:46:57.276779 | orchestrator | Thursday 05 June 2025 19:45:23 +0000 (0:00:01.019) 0:09:34.278 ********* 2025-06-05 19:46:57.276784 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.276788 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-05 19:46:57.276793 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-05 19:46:57.276797 | orchestrator | 2025-06-05 19:46:57.276802 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-05 19:46:57.276806 | orchestrator | Thursday 05 June 2025 19:45:26 +0000 (0:00:02.516) 0:09:36.795 ********* 2025-06-05 19:46:57.276811 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-05 19:46:57.276815 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-06-05 19:46:57.276820 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.276824 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-05 19:46:57.276829 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-05 19:46:57.276834 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.276838 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-05 19:46:57.276843 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-05 19:46:57.276847 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.276852 | orchestrator | 2025-06-05 19:46:57.276856 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-05 19:46:57.276861 | orchestrator | Thursday 05 June 2025 19:45:27 +0000 (0:00:01.389) 0:09:38.184 ********* 2025-06-05 19:46:57.276865 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.276870 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.276874 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.276879 | orchestrator | 2025-06-05 19:46:57.276883 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-05 19:46:57.276888 | orchestrator | Thursday 05 June 2025 19:45:30 +0000 (0:00:02.484) 0:09:40.668 ********* 2025-06-05 19:46:57.276893 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.276897 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.276902 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.276906 | orchestrator | 2025-06-05 19:46:57.276911 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-05 19:46:57.276915 | orchestrator | Thursday 05 June 2025 19:45:30 +0000 (0:00:00.277) 0:09:40.946 ********* 2025-06-05 19:46:57.276920 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-05 19:46:57.276925 | orchestrator | 2025-06-05 19:46:57.276929 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-05 19:46:57.276934 | orchestrator | Thursday 05 June 2025 19:45:31 +0000 (0:00:00.756) 0:09:41.702 ********* 2025-06-05 19:46:57.276941 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.276946 | orchestrator | 2025-06-05 19:46:57.276950 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-05 19:46:57.276955 | orchestrator | Thursday 05 June 2025 19:45:31 +0000 (0:00:00.509) 0:09:42.212 ********* 2025-06-05 19:46:57.276959 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.276964 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.276969 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.276973 | orchestrator | 2025-06-05 19:46:57.276978 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-05 19:46:57.276982 | orchestrator | Thursday 05 June 2025 19:45:32 +0000 (0:00:01.127) 0:09:43.339 ********* 2025-06-05 19:46:57.276987 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.277006 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.277011 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.277015 | orchestrator | 2025-06-05 19:46:57.277020 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-05 19:46:57.277024 | orchestrator | Thursday 05 June 2025 19:45:34 +0000 (0:00:01.332) 0:09:44.672 ********* 2025-06-05 19:46:57.277029 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.277033 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.277038 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.277042 | orchestrator | 2025-06-05 
19:46:57.277047 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-05 19:46:57.277051 | orchestrator | Thursday 05 June 2025 19:45:35 +0000 (0:00:01.736) 0:09:46.409 ********* 2025-06-05 19:46:57.277056 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.277063 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.277068 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.277072 | orchestrator | 2025-06-05 19:46:57.277077 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-05 19:46:57.277082 | orchestrator | Thursday 05 June 2025 19:45:37 +0000 (0:00:01.883) 0:09:48.292 ********* 2025-06-05 19:46:57.277086 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277091 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277095 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277100 | orchestrator | 2025-06-05 19:46:57.277104 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-05 19:46:57.277109 | orchestrator | Thursday 05 June 2025 19:45:39 +0000 (0:00:01.463) 0:09:49.756 ********* 2025-06-05 19:46:57.277113 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.277118 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.277122 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.277127 | orchestrator | 2025-06-05 19:46:57.277131 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-05 19:46:57.277139 | orchestrator | Thursday 05 June 2025 19:45:39 +0000 (0:00:00.652) 0:09:50.409 ********* 2025-06-05 19:46:57.277143 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.277148 | orchestrator | 2025-06-05 19:46:57.277152 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2025-06-05 19:46:57.277157 | orchestrator | Thursday 05 June 2025 19:45:40 +0000 (0:00:00.691) 0:09:51.101 ********* 2025-06-05 19:46:57.277162 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277166 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277171 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277175 | orchestrator | 2025-06-05 19:46:57.277180 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-05 19:46:57.277184 | orchestrator | Thursday 05 June 2025 19:45:40 +0000 (0:00:00.313) 0:09:51.414 ********* 2025-06-05 19:46:57.277189 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.277193 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.277201 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.277205 | orchestrator | 2025-06-05 19:46:57.277210 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-05 19:46:57.277215 | orchestrator | Thursday 05 June 2025 19:45:42 +0000 (0:00:01.260) 0:09:52.674 ********* 2025-06-05 19:46:57.277219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:46:57.277224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:46:57.277228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:46:57.277233 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277237 | orchestrator | 2025-06-05 19:46:57.277242 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-05 19:46:57.277246 | orchestrator | Thursday 05 June 2025 19:45:42 +0000 (0:00:00.786) 0:09:53.461 ********* 2025-06-05 19:46:57.277251 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277255 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277260 | orchestrator | ok: [testbed-node-5] 2025-06-05 
19:46:57.277264 | orchestrator | 2025-06-05 19:46:57.277269 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-05 19:46:57.277273 | orchestrator | 2025-06-05 19:46:57.277278 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-05 19:46:57.277282 | orchestrator | Thursday 05 June 2025 19:45:43 +0000 (0:00:00.745) 0:09:54.207 ********* 2025-06-05 19:46:57.277287 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.277291 | orchestrator | 2025-06-05 19:46:57.277296 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-05 19:46:57.277301 | orchestrator | Thursday 05 June 2025 19:45:44 +0000 (0:00:00.522) 0:09:54.729 ********* 2025-06-05 19:46:57.277305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.277310 | orchestrator | 2025-06-05 19:46:57.277314 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-05 19:46:57.277319 | orchestrator | Thursday 05 June 2025 19:45:44 +0000 (0:00:00.722) 0:09:55.451 ********* 2025-06-05 19:46:57.277323 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277328 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277332 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277337 | orchestrator | 2025-06-05 19:46:57.277341 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-05 19:46:57.277346 | orchestrator | Thursday 05 June 2025 19:45:45 +0000 (0:00:00.317) 0:09:55.769 ********* 2025-06-05 19:46:57.277350 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277355 | orchestrator | ok: [testbed-node-4] 2025-06-05 
19:46:57.277359 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277364 | orchestrator | 2025-06-05 19:46:57.277368 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-05 19:46:57.277373 | orchestrator | Thursday 05 June 2025 19:45:45 +0000 (0:00:00.690) 0:09:56.460 ********* 2025-06-05 19:46:57.277377 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277382 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277386 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277391 | orchestrator | 2025-06-05 19:46:57.277395 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-05 19:46:57.277400 | orchestrator | Thursday 05 June 2025 19:45:46 +0000 (0:00:00.673) 0:09:57.133 ********* 2025-06-05 19:46:57.277404 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277409 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277413 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277418 | orchestrator | 2025-06-05 19:46:57.277422 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-05 19:46:57.277427 | orchestrator | Thursday 05 June 2025 19:45:47 +0000 (0:00:00.970) 0:09:58.104 ********* 2025-06-05 19:46:57.277435 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277439 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277444 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277448 | orchestrator | 2025-06-05 19:46:57.277456 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-05 19:46:57.277461 | orchestrator | Thursday 05 June 2025 19:45:47 +0000 (0:00:00.295) 0:09:58.399 ********* 2025-06-05 19:46:57.277465 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277470 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277474 | orchestrator | skipping: 
[testbed-node-5] 2025-06-05 19:46:57.277479 | orchestrator | 2025-06-05 19:46:57.277483 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-05 19:46:57.277488 | orchestrator | Thursday 05 June 2025 19:45:48 +0000 (0:00:00.285) 0:09:58.684 ********* 2025-06-05 19:46:57.277492 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277497 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277501 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277506 | orchestrator | 2025-06-05 19:46:57.277510 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-05 19:46:57.277515 | orchestrator | Thursday 05 June 2025 19:45:48 +0000 (0:00:00.272) 0:09:58.957 ********* 2025-06-05 19:46:57.277522 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277527 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277531 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277536 | orchestrator | 2025-06-05 19:46:57.277540 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-05 19:46:57.277545 | orchestrator | Thursday 05 June 2025 19:45:49 +0000 (0:00:00.999) 0:09:59.957 ********* 2025-06-05 19:46:57.277550 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277554 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277559 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277563 | orchestrator | 2025-06-05 19:46:57.277568 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-05 19:46:57.277572 | orchestrator | Thursday 05 June 2025 19:45:50 +0000 (0:00:00.715) 0:10:00.673 ********* 2025-06-05 19:46:57.277577 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277581 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277586 | orchestrator | skipping: [testbed-node-5] 2025-06-05 
19:46:57.277590 | orchestrator | 2025-06-05 19:46:57.277595 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-05 19:46:57.277599 | orchestrator | Thursday 05 June 2025 19:45:50 +0000 (0:00:00.292) 0:10:00.965 ********* 2025-06-05 19:46:57.277604 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277608 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277613 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277618 | orchestrator | 2025-06-05 19:46:57.277622 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-05 19:46:57.277627 | orchestrator | Thursday 05 June 2025 19:45:50 +0000 (0:00:00.278) 0:10:01.243 ********* 2025-06-05 19:46:57.277632 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277636 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277641 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277646 | orchestrator | 2025-06-05 19:46:57.277650 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-05 19:46:57.277655 | orchestrator | Thursday 05 June 2025 19:45:51 +0000 (0:00:00.618) 0:10:01.862 ********* 2025-06-05 19:46:57.277660 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277664 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277669 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277673 | orchestrator | 2025-06-05 19:46:57.277678 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-05 19:46:57.277682 | orchestrator | Thursday 05 June 2025 19:45:51 +0000 (0:00:00.308) 0:10:02.170 ********* 2025-06-05 19:46:57.277687 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277691 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277701 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277706 | orchestrator | 2025-06-05 
19:46:57.277711 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-05 19:46:57.277715 | orchestrator | Thursday 05 June 2025 19:45:51 +0000 (0:00:00.325) 0:10:02.496 ********* 2025-06-05 19:46:57.277720 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277724 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277729 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277733 | orchestrator | 2025-06-05 19:46:57.277738 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-05 19:46:57.277742 | orchestrator | Thursday 05 June 2025 19:45:52 +0000 (0:00:00.282) 0:10:02.779 ********* 2025-06-05 19:46:57.277747 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277751 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277756 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277760 | orchestrator | 2025-06-05 19:46:57.277765 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-05 19:46:57.277769 | orchestrator | Thursday 05 June 2025 19:45:52 +0000 (0:00:00.572) 0:10:03.352 ********* 2025-06-05 19:46:57.277774 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.277778 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.277783 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.277787 | orchestrator | 2025-06-05 19:46:57.277792 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-05 19:46:57.277797 | orchestrator | Thursday 05 June 2025 19:45:53 +0000 (0:00:00.271) 0:10:03.623 ********* 2025-06-05 19:46:57.277801 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277806 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277810 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277815 | orchestrator | 2025-06-05 19:46:57.277819 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-05 19:46:57.277824 | orchestrator | Thursday 05 June 2025 19:45:53 +0000 (0:00:00.308) 0:10:03.932 ********* 2025-06-05 19:46:57.277828 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.277833 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.277837 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.277842 | orchestrator | 2025-06-05 19:46:57.277847 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-05 19:46:57.277851 | orchestrator | Thursday 05 June 2025 19:45:54 +0000 (0:00:00.727) 0:10:04.659 ********* 2025-06-05 19:46:57.277856 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.277860 | orchestrator | 2025-06-05 19:46:57.277865 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-05 19:46:57.277872 | orchestrator | Thursday 05 June 2025 19:45:54 +0000 (0:00:00.553) 0:10:05.212 ********* 2025-06-05 19:46:57.277877 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.277881 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-05 19:46:57.277886 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-05 19:46:57.277890 | orchestrator | 2025-06-05 19:46:57.277895 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-05 19:46:57.277922 | orchestrator | Thursday 05 June 2025 19:45:56 +0000 (0:00:02.020) 0:10:07.233 ********* 2025-06-05 19:46:57.277927 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-05 19:46:57.277932 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-05 19:46:57.277936 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.277941 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-06-05 19:46:57.277945 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-05 19:46:57.277953 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.277958 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-05 19:46:57.277963 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-05 19:46:57.277971 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.277976 | orchestrator | 2025-06-05 19:46:57.277980 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-05 19:46:57.277985 | orchestrator | Thursday 05 June 2025 19:45:58 +0000 (0:00:01.370) 0:10:08.603 ********* 2025-06-05 19:46:57.277989 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.278050 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.278055 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.278059 | orchestrator | 2025-06-05 19:46:57.278064 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-05 19:46:57.278069 | orchestrator | Thursday 05 June 2025 19:45:58 +0000 (0:00:00.289) 0:10:08.893 ********* 2025-06-05 19:46:57.278073 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.278078 | orchestrator | 2025-06-05 19:46:57.278083 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-05 19:46:57.278087 | orchestrator | Thursday 05 June 2025 19:45:58 +0000 (0:00:00.497) 0:10:09.390 ********* 2025-06-05 19:46:57.278092 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-05 19:46:57.278097 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-05 19:46:57.278101 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-05 19:46:57.278106 | orchestrator | 2025-06-05 19:46:57.278111 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-05 19:46:57.278115 | orchestrator | Thursday 05 June 2025 19:46:00 +0000 (0:00:01.351) 0:10:10.742 ********* 2025-06-05 19:46:57.278120 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.278124 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-05 19:46:57.278129 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.278133 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-05 19:46:57.278138 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.278142 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-05 19:46:57.278147 | orchestrator | 2025-06-05 19:46:57.278151 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-05 19:46:57.278156 | orchestrator | Thursday 05 June 2025 19:46:05 +0000 (0:00:04.948) 0:10:15.691 ********* 2025-06-05 19:46:57.278160 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.278165 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-05 19:46:57.278169 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.278174 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-05 19:46:57.278178 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:46:57.278183 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-05 19:46:57.278188 | orchestrator | 2025-06-05 19:46:57.278192 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-05 19:46:57.278197 | orchestrator | Thursday 05 June 2025 19:46:07 +0000 (0:00:02.511) 0:10:18.202 ********* 2025-06-05 19:46:57.278201 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-05 19:46:57.278206 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.278214 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-05 19:46:57.278218 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.278223 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-05 19:46:57.278228 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.278232 | orchestrator | 2025-06-05 19:46:57.278237 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-05 19:46:57.278245 | orchestrator | Thursday 05 June 2025 19:46:08 +0000 (0:00:01.165) 0:10:19.368 ********* 2025-06-05 19:46:57.278250 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-05 19:46:57.278254 | orchestrator | 2025-06-05 19:46:57.278259 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-05 19:46:57.278263 | orchestrator | Thursday 05 June 2025 19:46:09 +0000 (0:00:00.206) 0:10:19.574 ********* 2025-06-05 19:46:57.278268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-06-05 19:46:57.278273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278295 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.278299 | orchestrator | 2025-06-05 19:46:57.278304 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-05 19:46:57.278308 | orchestrator | Thursday 05 June 2025 19:46:10 +0000 (0:00:01.001) 0:10:20.575 ********* 2025-06-05 19:46:57.278313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278326 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278331 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-05 19:46:57.278335 | orchestrator | skipping: [testbed-node-3] 2025-06-05 
19:46:57.278339 | orchestrator | 2025-06-05 19:46:57.278344 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-05 19:46:57.278348 | orchestrator | Thursday 05 June 2025 19:46:10 +0000 (0:00:00.605) 0:10:21.180 ********* 2025-06-05 19:46:57.278352 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-05 19:46:57.278356 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-05 19:46:57.278360 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-05 19:46:57.278364 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-05 19:46:57.278368 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-05 19:46:57.278377 | orchestrator | 2025-06-05 19:46:57.278381 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-05 19:46:57.278385 | orchestrator | Thursday 05 June 2025 19:46:42 +0000 (0:00:31.676) 0:10:52.856 ********* 2025-06-05 19:46:57.278389 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.278393 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.278397 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.278401 | orchestrator | 2025-06-05 19:46:57.278406 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-05 19:46:57.278410 | orchestrator | 
Thursday 05 June 2025 19:46:42 +0000 (0:00:00.350) 0:10:53.207 ********* 2025-06-05 19:46:57.278414 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.278418 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.278422 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.278426 | orchestrator | 2025-06-05 19:46:57.278430 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-05 19:46:57.278434 | orchestrator | Thursday 05 June 2025 19:46:42 +0000 (0:00:00.301) 0:10:53.508 ********* 2025-06-05 19:46:57.278438 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.278443 | orchestrator | 2025-06-05 19:46:57.278447 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-05 19:46:57.278451 | orchestrator | Thursday 05 June 2025 19:46:43 +0000 (0:00:00.721) 0:10:54.230 ********* 2025-06-05 19:46:57.278455 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.278459 | orchestrator | 2025-06-05 19:46:57.278465 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-05 19:46:57.278470 | orchestrator | Thursday 05 June 2025 19:46:44 +0000 (0:00:00.514) 0:10:54.744 ********* 2025-06-05 19:46:57.278474 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.278478 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.278482 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.278486 | orchestrator | 2025-06-05 19:46:57.278490 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-05 19:46:57.278494 | orchestrator | Thursday 05 June 2025 19:46:45 +0000 (0:00:01.268) 0:10:56.013 ********* 2025-06-05 19:46:57.278498 | orchestrator | changed: 
[testbed-node-3] 2025-06-05 19:46:57.278502 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.278507 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.278511 | orchestrator | 2025-06-05 19:46:57.278515 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-05 19:46:57.278519 | orchestrator | Thursday 05 June 2025 19:46:46 +0000 (0:00:01.334) 0:10:57.348 ********* 2025-06-05 19:46:57.278523 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:46:57.278529 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:46:57.278534 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:46:57.278538 | orchestrator | 2025-06-05 19:46:57.278542 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-05 19:46:57.278546 | orchestrator | Thursday 05 June 2025 19:46:48 +0000 (0:00:01.822) 0:10:59.170 ********* 2025-06-05 19:46:57.278550 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-05 19:46:57.278554 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-05 19:46:57.278558 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-05 19:46:57.278563 | orchestrator | 2025-06-05 19:46:57.278567 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-05 19:46:57.278576 | orchestrator | Thursday 05 June 2025 19:46:51 +0000 (0:00:02.576) 0:11:01.747 ********* 2025-06-05 19:46:57.278580 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.278584 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.278588 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.278593 | orchestrator 
| 2025-06-05 19:46:57.278597 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-05 19:46:57.278601 | orchestrator | Thursday 05 June 2025 19:46:51 +0000 (0:00:00.318) 0:11:02.066 ********* 2025-06-05 19:46:57.278605 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:46:57.278609 | orchestrator | 2025-06-05 19:46:57.278613 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-05 19:46:57.278617 | orchestrator | Thursday 05 June 2025 19:46:52 +0000 (0:00:00.492) 0:11:02.558 ********* 2025-06-05 19:46:57.278621 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.278625 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.278629 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.278634 | orchestrator | 2025-06-05 19:46:57.278638 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-05 19:46:57.278642 | orchestrator | Thursday 05 June 2025 19:46:52 +0000 (0:00:00.517) 0:11:03.076 ********* 2025-06-05 19:46:57.278646 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:46:57.278650 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:46:57.278654 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:46:57.278658 | orchestrator | 2025-06-05 19:46:57.278662 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-05 19:46:57.278666 | orchestrator | Thursday 05 June 2025 19:46:52 +0000 (0:00:00.316) 0:11:03.393 ********* 2025-06-05 19:46:57.278670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:46:57.278675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:46:57.278679 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:46:57.278683 | orchestrator 
| skipping: [testbed-node-3] 2025-06-05 19:46:57.278687 | orchestrator | 2025-06-05 19:46:57.278691 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-05 19:46:57.278695 | orchestrator | Thursday 05 June 2025 19:46:53 +0000 (0:00:00.580) 0:11:03.973 ********* 2025-06-05 19:46:57.278699 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:46:57.278703 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:46:57.278707 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:46:57.278711 | orchestrator | 2025-06-05 19:46:57.278716 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:46:57.278720 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-05 19:46:57.278724 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-05 19:46:57.278728 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-05 19:46:57.278732 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-05 19:46:57.278736 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-05 19:46:57.278743 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-05 19:46:57.278747 | orchestrator | 2025-06-05 19:46:57.278751 | orchestrator | 2025-06-05 19:46:57.278755 | orchestrator | 2025-06-05 19:46:57.278760 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:46:57.278767 | orchestrator | Thursday 05 June 2025 19:46:53 +0000 (0:00:00.228) 0:11:04.201 ********* 2025-06-05 19:46:57.278771 | orchestrator | =============================================================================== 
2025-06-05 19:46:57.278776 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 70.00s 2025-06-05 19:46:57.278780 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 37.20s 2025-06-05 19:46:57.278784 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.62s 2025-06-05 19:46:57.278788 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.68s 2025-06-05 19:46:57.278794 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.01s 2025-06-05 19:46:57.278799 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.28s 2025-06-05 19:46:57.278803 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.26s 2025-06-05 19:46:57.278807 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.63s 2025-06-05 19:46:57.278811 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.62s 2025-06-05 19:46:57.278815 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.55s 2025-06-05 19:46:57.278819 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.23s 2025-06-05 19:46:57.278823 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.36s 2025-06-05 19:46:57.278827 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.95s 2025-06-05 19:46:57.278831 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.86s 2025-06-05 19:46:57.278835 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.67s 2025-06-05 19:46:57.278839 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.07s 2025-06-05 
19:46:57.278844 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.03s 2025-06-05 19:46:57.278848 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.72s 2025-06-05 19:46:57.278852 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.66s 2025-06-05 19:46:57.278856 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.50s 2025-06-05 19:46:57.278860 | orchestrator | 2025-06-05 19:46:57 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:00.296535 | orchestrator | 2025-06-05 19:47:00 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:00.296662 | orchestrator | 2025-06-05 19:47:00 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:00.301047 | orchestrator | 2025-06-05 19:47:00 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:00.301067 | orchestrator | 2025-06-05 19:47:00 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:03.347475 | orchestrator | 2025-06-05 19:47:03 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:03.349611 | orchestrator | 2025-06-05 19:47:03 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:03.351955 | orchestrator | 2025-06-05 19:47:03 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:03.352006 | orchestrator | 2025-06-05 19:47:03 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:06.400711 | orchestrator | 2025-06-05 19:47:06 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:06.401613 | orchestrator | 2025-06-05 19:47:06 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:06.403921 | orchestrator | 2025-06-05 19:47:06 | INFO  | Task 
82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:06.403997 | orchestrator | 2025-06-05 19:47:06 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:09.444682 | orchestrator | 2025-06-05 19:47:09 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:09.445769 | orchestrator | 2025-06-05 19:47:09 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:09.448353 | orchestrator | 2025-06-05 19:47:09 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:09.448382 | orchestrator | 2025-06-05 19:47:09 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:12.507610 | orchestrator | 2025-06-05 19:47:12 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:12.509353 | orchestrator | 2025-06-05 19:47:12 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:12.509388 | orchestrator | 2025-06-05 19:47:12 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:12.509400 | orchestrator | 2025-06-05 19:47:12 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:15.553531 | orchestrator | 2025-06-05 19:47:15 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:15.553639 | orchestrator | 2025-06-05 19:47:15 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:15.553656 | orchestrator | 2025-06-05 19:47:15 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:15.553668 | orchestrator | 2025-06-05 19:47:15 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:18.604210 | orchestrator | 2025-06-05 19:47:18 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:18.605505 | orchestrator | 2025-06-05 19:47:18 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state 
STARTED 2025-06-05 19:47:18.607578 | orchestrator | 2025-06-05 19:47:18 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:18.607699 | orchestrator | 2025-06-05 19:47:18 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:21.655881 | orchestrator | 2025-06-05 19:47:21 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:21.659426 | orchestrator | 2025-06-05 19:47:21 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:21.662234 | orchestrator | 2025-06-05 19:47:21 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:21.662281 | orchestrator | 2025-06-05 19:47:21 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:24.704995 | orchestrator | 2025-06-05 19:47:24 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state STARTED 2025-06-05 19:47:24.706765 | orchestrator | 2025-06-05 19:47:24 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state STARTED 2025-06-05 19:47:24.707495 | orchestrator | 2025-06-05 19:47:24 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED 2025-06-05 19:47:24.707858 | orchestrator | 2025-06-05 19:47:24 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:47:27.766929 | orchestrator | 2025-06-05 19:47:27.767326 | orchestrator | 2025-06-05 19:47:27.767350 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-05 19:47:27.767364 | orchestrator | 2025-06-05 19:47:27.767376 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-05 19:47:27.767387 | orchestrator | Thursday 05 June 2025 19:44:20 +0000 (0:00:00.108) 0:00:00.108 ********* 2025-06-05 19:47:27.767399 | orchestrator | ok: [localhost] => { 2025-06-05 19:47:27.767412 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been 
deployed. This is fine." 2025-06-05 19:47:27.767448 | orchestrator | } 2025-06-05 19:47:27.767460 | orchestrator | 2025-06-05 19:47:27.767471 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-05 19:47:27.767482 | orchestrator | Thursday 05 June 2025 19:44:20 +0000 (0:00:00.053) 0:00:00.161 ********* 2025-06-05 19:47:27.767494 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-05 19:47:27.767507 | orchestrator | ...ignoring 2025-06-05 19:47:27.767518 | orchestrator | 2025-06-05 19:47:27.767529 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-05 19:47:27.767540 | orchestrator | Thursday 05 June 2025 19:44:23 +0000 (0:00:02.786) 0:00:02.948 ********* 2025-06-05 19:47:27.767551 | orchestrator | skipping: [localhost] 2025-06-05 19:47:27.767562 | orchestrator | 2025-06-05 19:47:27.767572 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-05 19:47:27.767583 | orchestrator | Thursday 05 June 2025 19:44:23 +0000 (0:00:00.055) 0:00:03.003 ********* 2025-06-05 19:47:27.767594 | orchestrator | ok: [localhost] 2025-06-05 19:47:27.767605 | orchestrator | 2025-06-05 19:47:27.767616 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:47:27.767627 | orchestrator | 2025-06-05 19:47:27.767638 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:47:27.767648 | orchestrator | Thursday 05 June 2025 19:44:23 +0000 (0:00:00.152) 0:00:03.155 ********* 2025-06-05 19:47:27.767659 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.767670 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.767681 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.767691 | orchestrator | 
2025-06-05 19:47:27.767702 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:47:27.767713 | orchestrator | Thursday 05 June 2025 19:44:24 +0000 (0:00:00.296) 0:00:03.452 ********* 2025-06-05 19:47:27.767724 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-05 19:47:27.767735 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-05 19:47:27.767746 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-05 19:47:27.767757 | orchestrator | 2025-06-05 19:47:27.767767 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-05 19:47:27.767778 | orchestrator | 2025-06-05 19:47:27.767789 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-05 19:47:27.767800 | orchestrator | Thursday 05 June 2025 19:44:24 +0000 (0:00:00.689) 0:00:04.141 ********* 2025-06-05 19:47:27.767810 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-05 19:47:27.767822 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-05 19:47:27.767833 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-05 19:47:27.767844 | orchestrator | 2025-06-05 19:47:27.767855 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-05 19:47:27.767866 | orchestrator | Thursday 05 June 2025 19:44:25 +0000 (0:00:00.424) 0:00:04.566 ********* 2025-06-05 19:47:27.767877 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:47:27.767889 | orchestrator | 2025-06-05 19:47:27.767900 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-05 19:47:27.767913 | orchestrator | Thursday 05 June 2025 19:44:25 +0000 (0:00:00.682) 0:00:05.249 ********* 2025-06-05 19:47:27.767967 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.767996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.768022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.768044 | orchestrator | 2025-06-05 19:47:27.768065 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-05 19:47:27.768079 | orchestrator | Thursday 05 June 2025 19:44:29 +0000 (0:00:03.228) 0:00:08.477 ********* 2025-06-05 19:47:27.768092 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.768145 | orchestrator | 
skipping: [testbed-node-2] 2025-06-05 19:47:27.768160 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.768173 | orchestrator | 2025-06-05 19:47:27.768186 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-05 19:47:27.768198 | orchestrator | Thursday 05 June 2025 19:44:29 +0000 (0:00:00.602) 0:00:09.080 ********* 2025-06-05 19:47:27.768210 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.768223 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.768235 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.768248 | orchestrator | 2025-06-05 19:47:27.768261 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-05 19:47:27.768271 | orchestrator | Thursday 05 June 2025 19:44:31 +0000 (0:00:01.511) 0:00:10.591 ********* 2025-06-05 19:47:27.768284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.768311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.768332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.768345 | orchestrator | 2025-06-05 19:47:27.768356 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-05 19:47:27.768367 | orchestrator | Thursday 05 June 2025 19:44:35 +0000 (0:00:04.128) 0:00:14.720 ********* 2025-06-05 19:47:27.768378 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.768389 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.768400 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.768418 | orchestrator | 2025-06-05 19:47:27.768429 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-05 19:47:27.768440 | orchestrator | Thursday 05 June 2025 19:44:36 +0000 (0:00:01.033) 0:00:15.754 ********* 2025-06-05 19:47:27.768451 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.768462 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:47:27.768472 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:47:27.768483 | orchestrator | 2025-06-05 19:47:27.768494 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-05 19:47:27.768510 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:03.433) 0:00:19.187 ********* 2025-06-05 19:47:27.768521 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:47:27.768624 | orchestrator | 2025-06-05 19:47:27.768636 | 
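The config-distribution tasks above ("Ensuring config directories exist", "Copying over galera.cnf") follow the usual kolla-ansible pattern of rendering templates into the host-side config directory that is bind-mounted into the container (`/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro` in the item dicts above). A hedged sketch, with the destination path inferred from that bind mount and the template and handler names being assumptions, not the real role source:

```yaml
# Hedged sketch of the config-copy pattern; not the actual mariadb role tasks.
- name: Ensuring config directories exist
  ansible.builtin.file:
    path: /etc/kolla/mariadb
    state: directory
    mode: "0770"

- name: Copying over galera.cnf
  ansible.builtin.template:
    src: galera.cnf.j2                 # hypothetical template name
    dest: /etc/kolla/mariadb/galera.cnf
    mode: "0660"
  notify: Restart mariadb container    # hypothetical handler name
```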
orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-05 19:47:27.768646 | orchestrator | Thursday 05 June 2025 19:44:40 +0000 (0:00:00.456) 0:00:19.644 ********* 2025-06-05 19:47:27.768668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768682 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.768694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768715 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.768741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 
19:47:27.768754 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.768765 | orchestrator | 2025-06-05 19:47:27.768776 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-05 19:47:27.768787 | orchestrator | Thursday 05 June 2025 19:44:43 +0000 (0:00:03.100) 0:00:22.744 ********* 2025-06-05 19:47:27.768799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768818 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.768843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768855 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.768867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768887 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.768898 | orchestrator | 2025-06-05 19:47:27.768909 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-05 19:47:27.768920 | orchestrator | Thursday 05 June 2025 19:44:46 +0000 (0:00:02.981) 0:00:25.726 ********* 2025-06-05 19:47:27.768948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.768982 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.768993 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.769010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-05 19:47:27.769022 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.769034 | orchestrator | 2025-06-05 19:47:27.769045 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-05 19:47:27.769056 | orchestrator | Thursday 05 June 2025 19:44:48 +0000 (0:00:02.217) 0:00:27.944 ********* 2025-06-05 19:47:27.769073 | orchestrator | 2025-06-05 19:47:27 | INFO  | Task ed084a01-3d6b-429a-a9da-892830053970 is in state SUCCESS 2025-06-05 19:47:27.769085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.769143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.769182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-05 19:47:27.769764 | orchestrator | 2025-06-05 19:47:27.769820 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-05 19:47:27.769832 | orchestrator | Thursday 05 June 2025 19:44:51 +0000 (0:00:02.487) 0:00:30.431 ********* 2025-06-05 19:47:27.769843 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.769854 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:47:27.769920 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:47:27.769936 | orchestrator | 2025-06-05 19:47:27.769947 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-05 19:47:27.769959 | orchestrator | Thursday 05 June 2025 19:44:52 +0000 (0:00:01.021) 0:00:31.453 ********* 2025-06-05 19:47:27.769969 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.769981 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.769992 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.770002 | orchestrator | 2025-06-05 19:47:27.770279 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-05 19:47:27.770303 | orchestrator | Thursday 05 June 2025 19:44:52 +0000 (0:00:00.310) 0:00:31.764 ********* 2025-06-05 19:47:27.770315 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.770324 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.770334 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.770344 | orchestrator | 2025-06-05 19:47:27.770353 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-05 19:47:27.770363 | 
orchestrator | Thursday 05 June 2025 19:44:52 +0000 (0:00:00.299) 0:00:32.064 ********* 2025-06-05 19:47:27.770374 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-05 19:47:27.770384 | orchestrator | ...ignoring 2025-06-05 19:47:27.770402 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-05 19:47:27.770412 | orchestrator | ...ignoring 2025-06-05 19:47:27.770422 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-05 19:47:27.770432 | orchestrator | ...ignoring 2025-06-05 19:47:27.770441 | orchestrator | 2025-06-05 19:47:27.770451 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-05 19:47:27.770461 | orchestrator | Thursday 05 June 2025 19:45:03 +0000 (0:00:10.804) 0:00:42.868 ********* 2025-06-05 19:47:27.770470 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.770480 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.770489 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.770499 | orchestrator | 2025-06-05 19:47:27.770508 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-05 19:47:27.770518 | orchestrator | Thursday 05 June 2025 19:45:04 +0000 (0:00:00.643) 0:00:43.512 ********* 2025-06-05 19:47:27.770528 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.770537 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.770547 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.770557 | orchestrator | 2025-06-05 19:47:27.770566 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-05 
19:47:27.770576 | orchestrator | Thursday 05 June 2025 19:45:04 +0000 (0:00:00.420) 0:00:43.932 ********* 2025-06-05 19:47:27.770585 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.770595 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.770604 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.770614 | orchestrator | 2025-06-05 19:47:27.770635 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-05 19:47:27.770645 | orchestrator | Thursday 05 June 2025 19:45:04 +0000 (0:00:00.395) 0:00:44.328 ********* 2025-06-05 19:47:27.770655 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.770664 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.770685 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.770695 | orchestrator | 2025-06-05 19:47:27.770704 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-05 19:47:27.770714 | orchestrator | Thursday 05 June 2025 19:45:05 +0000 (0:00:00.393) 0:00:44.721 ********* 2025-06-05 19:47:27.770724 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.770733 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.770743 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.770752 | orchestrator | 2025-06-05 19:47:27.770762 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-05 19:47:27.770772 | orchestrator | Thursday 05 June 2025 19:45:05 +0000 (0:00:00.616) 0:00:45.338 ********* 2025-06-05 19:47:27.770782 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.770791 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.770831 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.770841 | orchestrator | 2025-06-05 19:47:27.770851 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-05 
19:47:27.770861 | orchestrator | Thursday 05 June 2025 19:45:06 +0000 (0:00:00.453) 0:00:45.791 ********* 2025-06-05 19:47:27.770870 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.770880 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.770890 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-05 19:47:27.770900 | orchestrator | 2025-06-05 19:47:27.770911 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-05 19:47:27.770922 | orchestrator | Thursday 05 June 2025 19:45:06 +0000 (0:00:00.359) 0:00:46.151 ********* 2025-06-05 19:47:27.770933 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.770944 | orchestrator | 2025-06-05 19:47:27.770955 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-05 19:47:27.770966 | orchestrator | Thursday 05 June 2025 19:45:16 +0000 (0:00:10.053) 0:00:56.205 ********* 2025-06-05 19:47:27.770977 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.770988 | orchestrator | 2025-06-05 19:47:27.770999 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-05 19:47:27.771010 | orchestrator | Thursday 05 June 2025 19:45:16 +0000 (0:00:00.129) 0:00:56.335 ********* 2025-06-05 19:47:27.771021 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.771032 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.771043 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.771054 | orchestrator | 2025-06-05 19:47:27.771065 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-05 19:47:27.771076 | orchestrator | Thursday 05 June 2025 19:45:17 +0000 (0:00:00.944) 0:00:57.279 ********* 2025-06-05 19:47:27.771088 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.771099 | orchestrator | 2025-06-05 
19:47:27.771131 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-05 19:47:27.771143 | orchestrator | Thursday 05 June 2025 19:45:25 +0000 (0:00:07.472) 0:01:04.751 ********* 2025-06-05 19:47:27.771154 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.771165 | orchestrator | 2025-06-05 19:47:27.771177 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-05 19:47:27.771188 | orchestrator | Thursday 05 June 2025 19:45:27 +0000 (0:00:01.718) 0:01:06.470 ********* 2025-06-05 19:47:27.771198 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.771209 | orchestrator | 2025-06-05 19:47:27.771220 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-05 19:47:27.771231 | orchestrator | Thursday 05 June 2025 19:45:29 +0000 (0:00:02.325) 0:01:08.795 ********* 2025-06-05 19:47:27.771253 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.771262 | orchestrator | 2025-06-05 19:47:27.771272 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-05 19:47:27.771282 | orchestrator | Thursday 05 June 2025 19:45:29 +0000 (0:00:00.110) 0:01:08.905 ********* 2025-06-05 19:47:27.771292 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.771301 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.771311 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.771320 | orchestrator | 2025-06-05 19:47:27.771330 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-05 19:47:27.771346 | orchestrator | Thursday 05 June 2025 19:45:30 +0000 (0:00:00.493) 0:01:09.399 ********* 2025-06-05 19:47:27.771356 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.771366 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-05 
19:47:27.771375 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:47:27.771385 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:47:27.771394 | orchestrator | 2025-06-05 19:47:27.771404 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-05 19:47:27.771414 | orchestrator | skipping: no hosts matched 2025-06-05 19:47:27.771424 | orchestrator | 2025-06-05 19:47:27.771433 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-05 19:47:27.771443 | orchestrator | 2025-06-05 19:47:27.771453 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-05 19:47:27.771462 | orchestrator | Thursday 05 June 2025 19:45:30 +0000 (0:00:00.311) 0:01:09.710 ********* 2025-06-05 19:47:27.771472 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:47:27.771482 | orchestrator | 2025-06-05 19:47:27.771491 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-05 19:47:27.771501 | orchestrator | Thursday 05 June 2025 19:45:53 +0000 (0:00:23.233) 0:01:32.944 ********* 2025-06-05 19:47:27.771511 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.771520 | orchestrator | 2025-06-05 19:47:27.771530 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-05 19:47:27.771540 | orchestrator | Thursday 05 June 2025 19:46:09 +0000 (0:00:15.496) 0:01:48.440 ********* 2025-06-05 19:47:27.771549 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.771559 | orchestrator | 2025-06-05 19:47:27.771569 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-05 19:47:27.771578 | orchestrator | 2025-06-05 19:47:27.771588 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-05 19:47:27.771598 | orchestrator | Thursday 05 June 2025 
19:46:11 +0000 (0:00:02.416) 0:01:50.857 ********* 2025-06-05 19:47:27.771607 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:47:27.771617 | orchestrator | 2025-06-05 19:47:27.771634 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-05 19:47:27.771644 | orchestrator | Thursday 05 June 2025 19:46:35 +0000 (0:00:24.401) 0:02:15.259 ********* 2025-06-05 19:47:27.771654 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.771664 | orchestrator | 2025-06-05 19:47:27.771674 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-05 19:47:27.771684 | orchestrator | Thursday 05 June 2025 19:46:51 +0000 (0:00:15.540) 0:02:30.799 ********* 2025-06-05 19:47:27.771693 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.771703 | orchestrator | 2025-06-05 19:47:27.771713 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-05 19:47:27.771722 | orchestrator | 2025-06-05 19:47:27.771732 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-05 19:47:27.771742 | orchestrator | Thursday 05 June 2025 19:46:54 +0000 (0:00:02.698) 0:02:33.498 ********* 2025-06-05 19:47:27.771752 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.771761 | orchestrator | 2025-06-05 19:47:27.771771 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-05 19:47:27.771780 | orchestrator | Thursday 05 June 2025 19:47:04 +0000 (0:00:10.608) 0:02:44.107 ********* 2025-06-05 19:47:27.771797 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.771806 | orchestrator | 2025-06-05 19:47:27.771816 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-05 19:47:27.771826 | orchestrator | Thursday 05 June 2025 19:47:09 +0000 (0:00:04.651) 0:02:48.758 ********* 2025-06-05 
19:47:27.771836 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.771845 | orchestrator | 2025-06-05 19:47:27.771855 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-05 19:47:27.771865 | orchestrator | 2025-06-05 19:47:27.771875 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-05 19:47:27.771884 | orchestrator | Thursday 05 June 2025 19:47:11 +0000 (0:00:02.419) 0:02:51.177 ********* 2025-06-05 19:47:27.771894 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:47:27.771904 | orchestrator | 2025-06-05 19:47:27.771913 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-05 19:47:27.771923 | orchestrator | Thursday 05 June 2025 19:47:12 +0000 (0:00:00.508) 0:02:51.685 ********* 2025-06-05 19:47:27.771933 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.772068 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.772078 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.772088 | orchestrator | 2025-06-05 19:47:27.772098 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-05 19:47:27.772164 | orchestrator | Thursday 05 June 2025 19:47:14 +0000 (0:00:02.443) 0:02:54.129 ********* 2025-06-05 19:47:27.772175 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.772185 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.772195 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.772204 | orchestrator | 2025-06-05 19:47:27.772214 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-05 19:47:27.772224 | orchestrator | Thursday 05 June 2025 19:47:17 +0000 (0:00:02.379) 0:02:56.508 ********* 2025-06-05 19:47:27.772233 | orchestrator | skipping: [testbed-node-1] 2025-06-05 
19:47:27.772243 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.772253 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.772262 | orchestrator | 2025-06-05 19:47:27.772272 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-05 19:47:27.772282 | orchestrator | Thursday 05 June 2025 19:47:19 +0000 (0:00:02.432) 0:02:58.941 ********* 2025-06-05 19:47:27.772291 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.772301 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.772311 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.772320 | orchestrator | 2025-06-05 19:47:27.772330 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-05 19:47:27.772340 | orchestrator | Thursday 05 June 2025 19:47:21 +0000 (0:00:02.264) 0:03:01.205 ********* 2025-06-05 19:47:27.772349 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.772359 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.772368 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.772378 | orchestrator | 2025-06-05 19:47:27.772394 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-05 19:47:27.772404 | orchestrator | Thursday 05 June 2025 19:47:24 +0000 (0:00:03.048) 0:03:04.254 ********* 2025-06-05 19:47:27.772413 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.772423 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.772433 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.772442 | orchestrator | 2025-06-05 19:47:27.772452 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:47:27.772462 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-05 19:47:27.772472 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=1  2025-06-05 19:47:27.772492 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-05 19:47:27.772502 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-05 19:47:27.772512 | orchestrator | 2025-06-05 19:47:27.772521 | orchestrator | 2025-06-05 19:47:27.772531 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:47:27.772541 | orchestrator | Thursday 05 June 2025 19:47:25 +0000 (0:00:00.204) 0:03:04.459 ********* 2025-06-05 19:47:27.772550 | orchestrator | =============================================================================== 2025-06-05 19:47:27.772560 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 47.64s 2025-06-05 19:47:27.772578 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 31.04s 2025-06-05 19:47:27.772588 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.80s 2025-06-05 19:47:27.772598 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.61s 2025-06-05 19:47:27.772607 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.05s 2025-06-05 19:47:27.772617 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.47s 2025-06-05 19:47:27.772627 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.12s 2025-06-05 19:47:27.772636 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.65s 2025-06-05 19:47:27.772646 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.13s 2025-06-05 19:47:27.772656 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 
3.43s 2025-06-05 19:47:27.772665 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.23s 2025-06-05 19:47:27.772675 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.10s 2025-06-05 19:47:27.772683 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.05s 2025-06-05 19:47:27.772691 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.98s 2025-06-05 19:47:27.772701 | orchestrator | Check MariaDB service --------------------------------------------------- 2.79s 2025-06-05 19:47:27.772710 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.49s 2025-06-05 19:47:27.772719 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.44s 2025-06-05 19:47:27.772728 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.43s 2025-06-05 19:47:27.772737 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.42s 2025-06-05 19:47:27.772747 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.38s 2025-06-05 19:47:27.772756 | orchestrator | 2025-06-05 19:47:27 | INFO  | Task dbc0a3bb-0131-4ed1-a4c4-92c908d84c37 is in state STARTED 2025-06-05 19:47:27.772765 | orchestrator | 2025-06-05 19:47:27 | INFO  | Task b2edf5be-868f-4b11-a25e-0316fbea6c96 is in state SUCCESS 2025-06-05 19:47:27.772774 | orchestrator | 2025-06-05 19:47:27.772783 | orchestrator | 2025-06-05 19:47:27.772792 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:47:27.772801 | orchestrator | 2025-06-05 19:47:27.772810 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:47:27.772819 | orchestrator | Thursday 05 June 2025 19:44:20 +0000 
(0:00:00.266) 0:00:00.266 ********* 2025-06-05 19:47:27.772828 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:47:27.772837 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:47:27.772846 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:47:27.772855 | orchestrator | 2025-06-05 19:47:27.772864 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:47:27.772873 | orchestrator | Thursday 05 June 2025 19:44:21 +0000 (0:00:00.274) 0:00:00.540 ********* 2025-06-05 19:47:27.772887 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-05 19:47:27.772896 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-05 19:47:27.772905 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-05 19:47:27.772914 | orchestrator | 2025-06-05 19:47:27.772924 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-05 19:47:27.772933 | orchestrator | 2025-06-05 19:47:27.772942 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-05 19:47:27.772951 | orchestrator | Thursday 05 June 2025 19:44:21 +0000 (0:00:00.383) 0:00:00.924 ********* 2025-06-05 19:47:27.772965 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:47:27.772974 | orchestrator | 2025-06-05 19:47:27.772984 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-05 19:47:27.772993 | orchestrator | Thursday 05 June 2025 19:44:22 +0000 (0:00:00.464) 0:00:01.388 ********* 2025-06-05 19:47:27.773001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-05 19:47:27.773010 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-05 19:47:27.773020 | orchestrator | 
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-05 19:47:27.773029 | orchestrator | 2025-06-05 19:47:27.773038 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-05 19:47:27.773047 | orchestrator | Thursday 05 June 2025 19:44:22 +0000 (0:00:00.642) 0:00:02.031 ********* 2025-06-05 19:47:27.773062 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773173 | orchestrator | 2025-06-05 19:47:27.773181 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-05 19:47:27.773190 | orchestrator | Thursday 05 June 2025 19:44:24 +0000 (0:00:01.707) 0:00:03.739 ********* 2025-06-05 19:47:27.773198 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:47:27.773206 | orchestrator | 2025-06-05 19:47:27.773214 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-05 19:47:27.773228 | orchestrator | Thursday 05 June 2025 19:44:24 +0000 (0:00:00.507) 0:00:04.247 ********* 2025-06-05 19:47:27.773237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 
19:47:27.773250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773283 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 
19:47:27.773301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773310 | orchestrator | 2025-06-05 19:47:27.773318 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-05 19:47:27.773326 | orchestrator | Thursday 05 June 2025 19:44:27 +0000 (0:00:02.757) 0:00:07.005 ********* 2025-06-05 19:47:27.773339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:47:27.773348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:47:27.773362 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.773370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:47:27.773383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:47:27.773392 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.773400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:47:27.773415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:47:27.773430 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.773438 | orchestrator | 2025-06-05 19:47:27.773446 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-05 19:47:27.773454 | orchestrator | Thursday 05 June 2025 19:44:28 +0000 (0:00:01.160) 0:00:08.165 ********* 2025-06-05 19:47:27.773462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:47:27.773475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:47:27.773484 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:47:27.773492 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:47:27.773507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:47:27.773521 | 
orchestrator | skipping: [testbed-node-1] 2025-06-05 19:47:27.773529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-05 19:47:27.773542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}})  2025-06-05 19:47:27.773551 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:47:27.773559 | orchestrator | 2025-06-05 19:47:27.773567 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-05 19:47:27.773575 | orchestrator | Thursday 05 June 2025 19:44:29 +0000 (0:00:00.732) 0:00:08.897 ********* 2025-06-05 19:47:27.773583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 
'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-05 19:47:27.773663 | orchestrator | 2025-06-05 19:47:27.773671 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-05 19:47:27.773679 | orchestrator | Thursday 05 June 2025 19:44:31 +0000 (0:00:02.431) 0:00:11.329 ********* 2025-06-05 19:47:27.773688 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.773696 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:47:27.773703 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:47:27.773711 | orchestrator | 2025-06-05 19:47:27.773719 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-05 19:47:27.773727 | orchestrator | Thursday 05 June 2025 19:44:35 +0000 (0:00:03.679) 0:00:15.008 ********* 2025-06-05 19:47:27.773735 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:47:27.773743 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:47:27.773751 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:47:27.773759 | orchestrator | 2025-06-05 19:47:27.773767 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-05 19:47:27.773775 | orchestrator | Thursday 05 June 2025 19:44:37 +0000 (0:00:01.488) 0:00:16.496 ********* 2025-06-05 19:47:27.773783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 
'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-05 19:47:27.773807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-06-05 19:47:27.773826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-05 19:47:27.773835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-05 19:47:27.773848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-06-05 19:47:27.773857 | orchestrator |
2025-06-05 19:47:27.773865 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-05 19:47:27.773874 | orchestrator | Thursday 05 June 2025 19:44:38 +0000 (0:00:01.838) 0:00:18.334 *********
2025-06-05 19:47:27.773882 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:47:27.773890 |
orchestrator | skipping: [testbed-node-1]
2025-06-05 19:47:27.773897 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:47:27.773905 | orchestrator |
2025-06-05 19:47:27.773913 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-05 19:47:27.773921 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:00.242) 0:00:18.577 *********
2025-06-05 19:47:27.773935 | orchestrator |
2025-06-05 19:47:27.773943 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-05 19:47:27.773951 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:00.056) 0:00:18.633 *********
2025-06-05 19:47:27.773959 | orchestrator |
2025-06-05 19:47:27.773967 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-05 19:47:27.773975 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:00.056) 0:00:18.690 *********
2025-06-05 19:47:27.773983 | orchestrator |
2025-06-05 19:47:27.773991 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-06-05 19:47:27.773999 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:00.163) 0:00:18.854 *********
2025-06-05 19:47:27.774007 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:47:27.774014 | orchestrator |
2025-06-05 19:47:27.774050 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-06-05 19:47:27.774063 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:00.186) 0:00:19.040 *********
2025-06-05 19:47:27.774072 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:47:27.774080 | orchestrator |
2025-06-05 19:47:27.774088 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-06-05 19:47:27.774096 | orchestrator | Thursday 05 June 2025 19:44:39 +0000 (0:00:00.177) 0:00:19.218 *********
2025-06-05 19:47:27.774118 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:47:27.774127 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:47:27.774135 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:47:27.774143 | orchestrator |
2025-06-05 19:47:27.774151 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-06-05 19:47:27.774159 | orchestrator | Thursday 05 June 2025 19:45:51 +0000 (0:01:11.365) 0:01:30.583 *********
2025-06-05 19:47:27.774167 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:47:27.774175 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:47:27.774183 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:47:27.774191 | orchestrator |
2025-06-05 19:47:27.774199 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-05 19:47:27.774207 | orchestrator | Thursday 05 June 2025 19:47:13 +0000 (0:01:22.625) 0:02:53.209 *********
2025-06-05 19:47:27.774215 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:47:27.774223 | orchestrator |
2025-06-05 19:47:27.774231 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-06-05 19:47:27.774239 | orchestrator | Thursday 05 June 2025 19:47:14 +0000 (0:00:00.685) 0:02:53.894 *********
2025-06-05 19:47:27.774247 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:47:27.774255 | orchestrator |
2025-06-05 19:47:27.774263 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-06-05 19:47:27.774271 | orchestrator | Thursday 05 June 2025 19:47:17 +0000 (0:00:02.716) 0:02:56.611 *********
2025-06-05 19:47:27.774279 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:47:27.774287 | orchestrator |
2025-06-05 19:47:27.774295 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-06-05 19:47:27.774302 | orchestrator | Thursday 05 June 2025 19:47:19 +0000 (0:00:02.867) 0:02:59.002 *********
2025-06-05 19:47:27.774310 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:47:27.774318 | orchestrator |
2025-06-05 19:47:27.774326 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-06-05 19:47:27.774334 | orchestrator | Thursday 05 June 2025 19:47:22 +0000 (0:00:02.867) 0:03:01.870 *********
2025-06-05 19:47:27.774342 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:47:27.774350 | orchestrator |
2025-06-05 19:47:27.774358 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:47:27.774366 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:47:27.774380 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-05 19:47:27.774389 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-05 19:47:27.774397 | orchestrator |
2025-06-05 19:47:27.774405 | orchestrator |
2025-06-05 19:47:27.774413 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:47:27.774421 | orchestrator | Thursday 05 June 2025 19:47:25 +0000 (0:00:02.785) 0:03:04.655 *********
2025-06-05 19:47:27.774429 | orchestrator | ===============================================================================
2025-06-05 19:47:27.774437 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 82.63s
2025-06-05 19:47:27.774445 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.37s
2025-06-05 19:47:27.774453 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.68s
2025-06-05 19:47:27.774466 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.87s
2025-06-05 19:47:27.774474 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.79s
2025-06-05 19:47:27.774482 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.76s
2025-06-05 19:47:27.774490 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.72s
2025-06-05 19:47:27.774498 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.43s
2025-06-05 19:47:27.774505 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.39s
2025-06-05 19:47:27.774513 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.84s
2025-06-05 19:47:27.774521 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s
2025-06-05 19:47:27.774529 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.49s
2025-06-05 19:47:27.774537 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.16s
2025-06-05 19:47:27.774545 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.73s
2025-06-05 19:47:27.774553 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.69s
2025-06-05 19:47:27.774561 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.64s
2025-06-05 19:47:27.774569 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2025-06-05 19:47:27.774577 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.46s
2025-06-05 19:47:27.774585 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s
2025-06-05 19:47:27.774597 | orchestrator | opensearch : Flush handlers
--------------------------------------------- 0.28s
2025-06-05 19:47:27.774605 | orchestrator | 2025-06-05 19:47:27 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED
2025-06-05 19:47:27.774613 | orchestrator | 2025-06-05 19:47:27 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED
2025-06-05 19:47:27.774621 | orchestrator | 2025-06-05 19:47:27 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:47:30.822322 | orchestrator | 2025-06-05 19:47:30 | INFO  | Task dbc0a3bb-0131-4ed1-a4c4-92c908d84c37 is in state STARTED
2025-06-05 19:47:30.822434 | orchestrator | 2025-06-05 19:47:30 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state STARTED
2025-06-05 19:47:30.822751 | orchestrator | 2025-06-05 19:47:30 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED
2025-06-05 19:47:30.822776 | orchestrator | 2025-06-05 19:47:30 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:49:11.412326 | orchestrator | 2025-06-05 19:49:11 | INFO  | Task dbc0a3bb-0131-4ed1-a4c4-92c908d84c37 is in state STARTED
2025-06-05 19:49:11.415118 | orchestrator | 2025-06-05 19:49:11 | INFO  | Task 82072b63-357c-4106-ad81-5380348ded70 is in state SUCCESS
2025-06-05 19:49:11.416496 | orchestrator | 2025-06-05
19:49:11.416533 | orchestrator |
2025-06-05 19:49:11.416546 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-06-05 19:49:11.416605 | orchestrator |
2025-06-05 19:49:11.416618 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-05 19:49:11.416630 | orchestrator | Thursday 05 June 2025 19:46:58 +0000 (0:00:00.531) 0:00:00.531 *********
2025-06-05 19:49:11.416642 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:49:11.416701 | orchestrator |
2025-06-05 19:49:11.416822 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-05 19:49:11.417065 | orchestrator | Thursday 05 June 2025 19:46:58 +0000 (0:00:00.521) 0:00:01.052 *********
2025-06-05 19:49:11.417078 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417090 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417102 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417113 | orchestrator |
2025-06-05 19:49:11.417124 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-05 19:49:11.417136 | orchestrator | Thursday 05 June 2025 19:46:59 +0000 (0:00:00.619) 0:00:01.672 *********
2025-06-05 19:49:11.417146 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417157 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417168 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417179 | orchestrator |
2025-06-05 19:49:11.417190 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-05 19:49:11.417201 | orchestrator | Thursday 05 June 2025 19:46:59 +0000 (0:00:00.700) 0:00:01.912 *********
2025-06-05 19:49:11.417211 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417306 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417320 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417331 | orchestrator |
2025-06-05 19:49:11.417342 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-05 19:49:11.417353 | orchestrator | Thursday 05 June 2025 19:47:00 +0000 (0:00:00.272) 0:00:02.612 *********
2025-06-05 19:49:11.417388 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417399 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417410 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417421 | orchestrator |
2025-06-05 19:49:11.417432 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-05 19:49:11.417443 | orchestrator | Thursday 05 June 2025 19:47:00 +0000 (0:00:00.241) 0:00:02.885 *********
2025-06-05 19:49:11.417453 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417464 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417539 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417551 | orchestrator |
2025-06-05 19:49:11.417563 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-05 19:49:11.417574 | orchestrator | Thursday 05 June 2025 19:47:00 +0000 (0:00:00.271) 0:00:03.126 *********
2025-06-05 19:49:11.417585 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417596 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417607 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417617 | orchestrator |
2025-06-05 19:49:11.417629 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-05 19:49:11.417640 | orchestrator | Thursday 05 June 2025 19:47:01 +0000 (0:00:00.360) 0:00:03.397 *********
2025-06-05 19:49:11.417651 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:49:11.417663 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:49:11.417674 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:49:11.417687 | orchestrator |
2025-06-05 19:49:11.417700 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-06-05 19:49:11.417712 | orchestrator | Thursday 05 June 2025 19:47:01 +0000 (0:00:00.244) 0:00:03.758 *********
2025-06-05 19:49:11.417724 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417737 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417750 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417762 | orchestrator |
2025-06-05 19:49:11.417775 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-06-05 19:49:11.417788 | orchestrator | Thursday 05 June 2025 19:47:01 +0000 (0:00:00.244) 0:00:04.002 *********
2025-06-05 19:49:11.417801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-05 19:49:11.417813 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:49:11.417826 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:49:11.417838 | orchestrator |
2025-06-05 19:49:11.417849 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-06-05 19:49:11.417860 | orchestrator | Thursday 05 June 2025 19:47:02 +0000 (0:00:00.547) 0:00:04.550 *********
2025-06-05 19:49:11.417871 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:49:11.417882 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:49:11.417893 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:49:11.417905 | orchestrator |
2025-06-05 19:49:11.417916 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-05 19:49:11.417927 | orchestrator | Thursday 05 June 2025 19:47:02 +0000 (0:00:00.337) 0:00:04.888 *********
2025-06-05 19:49:11.417939 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-06-05 19:49:11.417950 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-05 19:49:11.417974 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-05 19:49:11.417985 | orchestrator |
2025-06-05 19:49:11.417996 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-05 19:49:11.418007 | orchestrator | Thursday 05 June 2025 19:47:04 +0000 (0:00:02.039) 0:00:06.927 *********
2025-06-05 19:49:11.418060 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-05 19:49:11.418072 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-05 19:49:11.418093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-05 19:49:11.418103 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:49:11.418113 | orchestrator |
2025-06-05 19:49:11.418123 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-05 19:49:11.418144 | orchestrator | Thursday 05 June 2025 19:47:05 +0000 (0:00:00.407) 0:00:07.335 *********
2025-06-05 19:49:11.418157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-05 19:49:11.418170 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-05 19:49:11.418180 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-05 19:49:11.418190 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:49:11.418199 | orchestrator |
2025-06-05 19:49:11.418209 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-05 19:49:11.418219 | orchestrator | Thursday 05 June 2025 19:47:05 +0000 (0:00:00.752) 0:00:08.087 *********
2025-06-05 19:49:11.418231 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-05 19:49:11.418244 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-05 19:49:11.418254 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-05 19:49:11.418265 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:49:11.418274 | orchestrator |
2025-06-05 19:49:11.418284 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container]
*************************** 2025-06-05 19:49:11.418294 | orchestrator | Thursday 05 June 2025 19:47:06 +0000 (0:00:00.157) 0:00:08.245 ********* 2025-06-05 19:49:11.418305 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e5d8124af8c1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-05 19:47:03.341453', 'end': '2025-06-05 19:47:03.391209', 'delta': '0:00:00.049756', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e5d8124af8c1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-05 19:49:11.418324 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f30135d76a37', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-05 19:47:04.058618', 'end': '2025-06-05 19:47:04.106325', 'delta': '0:00:00.047707', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f30135d76a37'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-05 19:49:11.418352 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b5c14b434f17', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-05 19:47:04.616650', 'end': 
'2025-06-05 19:47:04.653061', 'delta': '0:00:00.036411', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b5c14b434f17'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-05 19:49:11.418362 | orchestrator | 2025-06-05 19:49:11.418372 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-05 19:49:11.418382 | orchestrator | Thursday 05 June 2025 19:47:06 +0000 (0:00:00.359) 0:00:08.604 ********* 2025-06-05 19:49:11.418392 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.418402 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:49:11.418411 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:49:11.418421 | orchestrator | 2025-06-05 19:49:11.418431 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-05 19:49:11.418441 | orchestrator | Thursday 05 June 2025 19:47:06 +0000 (0:00:00.422) 0:00:09.027 ********* 2025-06-05 19:49:11.418450 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-05 19:49:11.418460 | orchestrator | 2025-06-05 19:49:11.418494 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-05 19:49:11.418504 | orchestrator | Thursday 05 June 2025 19:47:09 +0000 (0:00:02.234) 0:00:11.261 ********* 2025-06-05 19:49:11.418514 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418524 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.418534 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418544 | orchestrator | 2025-06-05 19:49:11.418553 | orchestrator | TASK [ceph-facts : 
Get current fsid] ******************************************* 2025-06-05 19:49:11.418563 | orchestrator | Thursday 05 June 2025 19:47:09 +0000 (0:00:00.275) 0:00:11.536 ********* 2025-06-05 19:49:11.418573 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418582 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.418592 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418602 | orchestrator | 2025-06-05 19:49:11.418612 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-05 19:49:11.418621 | orchestrator | Thursday 05 June 2025 19:47:09 +0000 (0:00:00.383) 0:00:11.920 ********* 2025-06-05 19:49:11.418631 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418641 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.418650 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418660 | orchestrator | 2025-06-05 19:49:11.418670 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-05 19:49:11.418680 | orchestrator | Thursday 05 June 2025 19:47:10 +0000 (0:00:00.450) 0:00:12.370 ********* 2025-06-05 19:49:11.418690 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.418699 | orchestrator | 2025-06-05 19:49:11.418709 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-05 19:49:11.418719 | orchestrator | Thursday 05 June 2025 19:47:10 +0000 (0:00:00.122) 0:00:12.493 ********* 2025-06-05 19:49:11.418735 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418745 | orchestrator | 2025-06-05 19:49:11.418755 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-05 19:49:11.418764 | orchestrator | Thursday 05 June 2025 19:47:10 +0000 (0:00:00.217) 0:00:12.710 ********* 2025-06-05 19:49:11.418774 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418784 | orchestrator | 
skipping: [testbed-node-4] 2025-06-05 19:49:11.418794 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418803 | orchestrator | 2025-06-05 19:49:11.418813 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-05 19:49:11.418823 | orchestrator | Thursday 05 June 2025 19:47:10 +0000 (0:00:00.264) 0:00:12.974 ********* 2025-06-05 19:49:11.418832 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418842 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.418852 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418861 | orchestrator | 2025-06-05 19:49:11.418871 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-05 19:49:11.418881 | orchestrator | Thursday 05 June 2025 19:47:11 +0000 (0:00:00.289) 0:00:13.264 ********* 2025-06-05 19:49:11.418890 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418900 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.418910 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418925 | orchestrator | 2025-06-05 19:49:11.418939 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-05 19:49:11.418956 | orchestrator | Thursday 05 June 2025 19:47:11 +0000 (0:00:00.451) 0:00:13.716 ********* 2025-06-05 19:49:11.418966 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.418976 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.418985 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.418995 | orchestrator | 2025-06-05 19:49:11.419009 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-05 19:49:11.419019 | orchestrator | Thursday 05 June 2025 19:47:11 +0000 (0:00:00.297) 0:00:14.014 ********* 2025-06-05 19:49:11.419029 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.419039 | orchestrator | 
skipping: [testbed-node-4] 2025-06-05 19:49:11.419048 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.419058 | orchestrator | 2025-06-05 19:49:11.419067 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-05 19:49:11.419077 | orchestrator | Thursday 05 June 2025 19:47:12 +0000 (0:00:00.279) 0:00:14.293 ********* 2025-06-05 19:49:11.419087 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.419096 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.419106 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.419116 | orchestrator | 2025-06-05 19:49:11.419126 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-05 19:49:11.419141 | orchestrator | Thursday 05 June 2025 19:47:12 +0000 (0:00:00.308) 0:00:14.602 ********* 2025-06-05 19:49:11.419151 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.419161 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.419170 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.419180 | orchestrator | 2025-06-05 19:49:11.419190 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-05 19:49:11.419199 | orchestrator | Thursday 05 June 2025 19:47:12 +0000 (0:00:00.441) 0:00:15.044 ********* 2025-06-05 19:49:11.419210 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a', 'dm-uuid-LVM-FJ6uKlHNSGbth2KF4rcsOp5SwCqZZXFqbp7EUNPk6nYvKjRXReNrgdrbcAUP75wR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf', 'dm-uuid-LVM-usj4PKAAo7bOTqcq2VpJyf3PYfNjK0vPdWUiYN0Pt9egly7bK34oCoFUCm1EK2VC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16', 
'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98', 'dm-uuid-LVM-01YdIcfs11p9JZGMgWn4H0UfDM053J43W4fFVzwIIS33LHdgeBRcb5dnpEhaGTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s5weHx-Yzfl-woeC-3VoH-1mHe-YyQa-EkSXvM', 'scsi-0QEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312', 'scsi-SQEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mh3H5D-yLz2-Hszp-mjP8-JYsP-T18X-vrUN4o', 'scsi-0QEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441', 'scsi-SQEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91', 'dm-uuid-LVM-rvVAKQOoNa1LDt85v5BCuQy3xGeCyndFeELQHQeC95k9Fy3dyt3JCS3tvdhusiwj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-06-05 19:49:11.419453 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766', 'scsi-SQEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419494 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-05 19:49:11.419572 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419629 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vdHksL-63Rz-dpTc-XuGp-uG8Q-nqpa-Y1fNNT', 'scsi-0QEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25', 'scsi-SQEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419639 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dL8QL9-QnSn-0KK1-A11Q-6XKs-ilZ5-3a2xD2', 'scsi-0QEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f', 'scsi-SQEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419650 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2', 'scsi-SQEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419670 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.419680 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.419694 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58', 'dm-uuid-LVM-AiQgXeMLqwZPZJwmmYyGvDG90hw0rDujSvclsUp4cC2cb5gI9Wp0oYIdoOTdvnOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5', 'dm-uuid-LVM-gW9yOshmp7eJcBlMCjxQnlZcdEM46DH6RosfnoVrZ7wDAoSEBYV30R3YJoU72UMm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419798 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-05 19:49:11.419821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TIo4v0-W8eY-u89J-pG5y-1hqb-Dcid-h2BAEN', 'scsi-0QEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8', 'scsi-SQEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419849 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6njWn-NxbI-PHJ9-m1Zl-AMQA-1JTZ-JrFVPO', 'scsi-0QEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e', 'scsi-SQEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419864 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b', 'scsi-SQEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419886 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-05 19:49:11.419897 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.419907 | orchestrator | 2025-06-05 19:49:11.419917 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-06-05 19:49:11.419927 | orchestrator | Thursday 05 June 2025 19:47:13 +0000 (0:00:00.562) 0:00:15.607 ********* 2025-06-05 19:49:11.419937 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a', 'dm-uuid-LVM-FJ6uKlHNSGbth2KF4rcsOp5SwCqZZXFqbp7EUNPk6nYvKjRXReNrgdrbcAUP75wR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.419949 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf', 'dm-uuid-LVM-usj4PKAAo7bOTqcq2VpJyf3PYfNjK0vPdWUiYN0Pt9egly7bK34oCoFUCm1EK2VC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.419959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.419970 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.419984 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420028 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420038 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98', 'dm-uuid-LVM-01YdIcfs11p9JZGMgWn4H0UfDM053J43W4fFVzwIIS33LHdgeBRcb5dnpEhaGTMD'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420063 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420086 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91', 'dm-uuid-LVM-rvVAKQOoNa1LDt85v5BCuQy3xGeCyndFeELQHQeC95k9Fy3dyt3JCS3tvdhusiwj'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420098 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16', 'scsi-SQEMU_QEMU_HARDDISK_cfa42d43-a7d6-4bf7-99bb-aae9db75ee30-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420109 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420130 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f5969faa--081d--5d9e--9303--7a3301cb4b7a-osd--block--f5969faa--081d--5d9e--9303--7a3301cb4b7a'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s5weHx-Yzfl-woeC-3VoH-1mHe-YyQa-EkSXvM', 'scsi-0QEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312', 'scsi-SQEMU_QEMU_HARDDISK_cc2778cf-ee73-4e7c-8a8d-1e7ee0f14312'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420158 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--46c2c746--0272--5326--baff--0a3e04c6e4bf-osd--block--46c2c746--0272--5326--baff--0a3e04c6e4bf'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Mh3H5D-yLz2-Hszp-mjP8-JYsP-T18X-vrUN4o', 'scsi-0QEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441', 'scsi-SQEMU_QEMU_HARDDISK_4472eb6b-1c6e-42f9-be0b-d37693300441'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766', 'scsi-SQEMU_QEMU_HARDDISK_9365a1ca-de8d-4d50-b195-b3372d88a766'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-22-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420234 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420245 | orchestrator | skipping: 
[testbed-node-3] 2025-06-05 19:49:11.420255 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420265 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420276 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420286 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16', 'scsi-SQEMU_QEMU_HARDDISK_77274211-aee3-4072-87ff-8de0b78784a9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420328 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--9f7f7c2a--d649--5a85--84b6--7657bf908d98-osd--block--9f7f7c2a--d649--5a85--84b6--7657bf908d98'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vdHksL-63Rz-dpTc-XuGp-uG8Q-nqpa-Y1fNNT', 'scsi-0QEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25', 'scsi-SQEMU_QEMU_HARDDISK_50a4d034-c5f0-4330-a7d8-ab894b1f0c25'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420339 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--67c48ddb--095b--5044--89f7--89f2250f1a91-osd--block--67c48ddb--095b--5044--89f7--89f2250f1a91'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dL8QL9-QnSn-0KK1-A11Q-6XKs-ilZ5-3a2xD2', 'scsi-0QEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f', 'scsi-SQEMU_QEMU_HARDDISK_da89fb13-3694-40ae-a272-70fb90f4e55f'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420359 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2', 'scsi-SQEMU_QEMU_HARDDISK_10a1977a-d4e6-4a8b-a76c-bb8b1466bde2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420794 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58', 'dm-uuid-LVM-AiQgXeMLqwZPZJwmmYyGvDG90hw0rDujSvclsUp4cC2cb5gI9Wp0oYIdoOTdvnOL'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420819 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-20-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 
253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420828 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5', 'dm-uuid-LVM-gW9yOshmp7eJcBlMCjxQnlZcdEM46DH6RosfnoVrZ7wDAoSEBYV30R3YJoU72UMm'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420836 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.420845 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420863 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420878 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420894 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420919 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420928 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420952 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15', 'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d77cc427-936e-41af-8b88-c14019752c42-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420963 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8d24cd11--dfc5--563c--af80--3beb61f8ef58-osd--block--8d24cd11--dfc5--563c--af80--3beb61f8ef58'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TIo4v0-W8eY-u89J-pG5y-1hqb-Dcid-h2BAEN', 'scsi-0QEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8', 'scsi-SQEMU_QEMU_HARDDISK_cf03b960-33f8-4fd5-8bea-a02272b072d8'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--afd5871a--1fd2--5e8b--989c--517ad42902e5-osd--block--afd5871a--1fd2--5e8b--989c--517ad42902e5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-s6njWn-NxbI-PHJ9-m1Zl-AMQA-1JTZ-JrFVPO', 'scsi-0QEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e', 'scsi-SQEMU_QEMU_HARDDISK_648969e3-6dd4-4b8b-ace0-3e999cf7526e'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.420990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b', 'scsi-SQEMU_QEMU_HARDDISK_24c03cc2-b2a5-4cf8-8852-1f4dda86236b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.421004 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-05-18-58-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-05 19:49:11.421013 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421021 | orchestrator | 2025-06-05 19:49:11.421029 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-05 19:49:11.421038 | orchestrator | Thursday 05 June 2025 19:47:14 +0000 (0:00:00.594) 0:00:16.202 ********* 2025-06-05 19:49:11.421046 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.421054 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:49:11.421062 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:49:11.421070 | orchestrator | 2025-06-05 19:49:11.421078 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-05 19:49:11.421086 | orchestrator | Thursday 05 June 2025 19:47:14 +0000 (0:00:00.733) 0:00:16.935 ********* 2025-06-05 19:49:11.421094 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.421102 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:49:11.421110 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:49:11.421118 | orchestrator | 2025-06-05 19:49:11.421126 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-05 19:49:11.421134 | orchestrator | Thursday 05 June 2025 19:47:15 +0000 (0:00:00.441) 0:00:17.376 ********* 2025-06-05 19:49:11.421142 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:49:11.421150 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:49:11.421158 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.421165 | orchestrator | 2025-06-05 19:49:11.421173 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-05 19:49:11.421181 | orchestrator | Thursday 05 June 2025 19:47:16 +0000 (0:00:01.515) 0:00:18.892 
********* 2025-06-05 19:49:11.421189 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421197 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421205 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421213 | orchestrator | 2025-06-05 19:49:11.421227 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-05 19:49:11.421235 | orchestrator | Thursday 05 June 2025 19:47:17 +0000 (0:00:00.268) 0:00:19.161 ********* 2025-06-05 19:49:11.421243 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421251 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421259 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421266 | orchestrator | 2025-06-05 19:49:11.421274 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-05 19:49:11.421282 | orchestrator | Thursday 05 June 2025 19:47:17 +0000 (0:00:00.390) 0:00:19.552 ********* 2025-06-05 19:49:11.421290 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421298 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421306 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421314 | orchestrator | 2025-06-05 19:49:11.421322 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-05 19:49:11.421330 | orchestrator | Thursday 05 June 2025 19:47:17 +0000 (0:00:00.476) 0:00:20.028 ********* 2025-06-05 19:49:11.421338 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-05 19:49:11.421346 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-05 19:49:11.421354 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-05 19:49:11.421362 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-05 19:49:11.421370 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-05 19:49:11.421378 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-05 19:49:11.421385 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-05 19:49:11.421393 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-05 19:49:11.421401 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-05 19:49:11.421411 | orchestrator | 2025-06-05 19:49:11.421421 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-05 19:49:11.421430 | orchestrator | Thursday 05 June 2025 19:47:18 +0000 (0:00:00.789) 0:00:20.817 ********* 2025-06-05 19:49:11.421439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-05 19:49:11.421448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-05 19:49:11.421458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-05 19:49:11.421489 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421499 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-05 19:49:11.421508 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-05 19:49:11.421517 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-05 19:49:11.421526 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421535 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-05 19:49:11.421544 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-05 19:49:11.421553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-05 19:49:11.421562 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421571 | orchestrator | 2025-06-05 19:49:11.421584 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-05 19:49:11.421594 | orchestrator | Thursday 05 June 2025 19:47:18 +0000 (0:00:00.313) 0:00:21.130 ********* 2025-06-05 
19:49:11.421603 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:49:11.421612 | orchestrator | 2025-06-05 19:49:11.421621 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-05 19:49:11.421632 | orchestrator | Thursday 05 June 2025 19:47:19 +0000 (0:00:00.674) 0:00:21.805 ********* 2025-06-05 19:49:11.421641 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421649 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421658 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421673 | orchestrator | 2025-06-05 19:49:11.421687 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-05 19:49:11.421696 | orchestrator | Thursday 05 June 2025 19:47:19 +0000 (0:00:00.312) 0:00:22.117 ********* 2025-06-05 19:49:11.421705 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421714 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421723 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421732 | orchestrator | 2025-06-05 19:49:11.421741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-05 19:49:11.421751 | orchestrator | Thursday 05 June 2025 19:47:20 +0000 (0:00:00.301) 0:00:22.419 ********* 2025-06-05 19:49:11.421759 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421767 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.421775 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:49:11.421783 | orchestrator | 2025-06-05 19:49:11.421791 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-05 19:49:11.421799 | orchestrator | Thursday 05 June 2025 19:47:20 +0000 (0:00:00.298) 0:00:22.717 ********* 2025-06-05 
19:49:11.421807 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.421815 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:49:11.421823 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:49:11.421831 | orchestrator | 2025-06-05 19:49:11.421839 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-05 19:49:11.421847 | orchestrator | Thursday 05 June 2025 19:47:21 +0000 (0:00:00.572) 0:00:23.290 ********* 2025-06-05 19:49:11.421855 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:49:11.421863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:49:11.421870 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:49:11.421878 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421886 | orchestrator | 2025-06-05 19:49:11.421894 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-05 19:49:11.421902 | orchestrator | Thursday 05 June 2025 19:47:21 +0000 (0:00:00.341) 0:00:23.632 ********* 2025-06-05 19:49:11.421910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:49:11.421918 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:49:11.421926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:49:11.421934 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421942 | orchestrator | 2025-06-05 19:49:11.421950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-05 19:49:11.421958 | orchestrator | Thursday 05 June 2025 19:47:21 +0000 (0:00:00.341) 0:00:23.973 ********* 2025-06-05 19:49:11.421965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-05 19:49:11.421973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-05 19:49:11.421981 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-05 19:49:11.421990 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.421997 | orchestrator | 2025-06-05 19:49:11.422005 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-05 19:49:11.422013 | orchestrator | Thursday 05 June 2025 19:47:22 +0000 (0:00:00.346) 0:00:24.320 ********* 2025-06-05 19:49:11.422050 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:49:11.422058 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:49:11.422066 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:49:11.422074 | orchestrator | 2025-06-05 19:49:11.422082 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-05 19:49:11.422090 | orchestrator | Thursday 05 June 2025 19:47:22 +0000 (0:00:00.305) 0:00:24.626 ********* 2025-06-05 19:49:11.422098 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-05 19:49:11.422106 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-05 19:49:11.422114 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-05 19:49:11.422122 | orchestrator | 2025-06-05 19:49:11.422130 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-05 19:49:11.422143 | orchestrator | Thursday 05 June 2025 19:47:23 +0000 (0:00:00.603) 0:00:25.229 ********* 2025-06-05 19:49:11.422151 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-05 19:49:11.422159 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-05 19:49:11.422167 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-05 19:49:11.422175 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-05 19:49:11.422183 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-06-05 19:49:11.422191 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-05 19:49:11.422199 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-05 19:49:11.422207 | orchestrator | 2025-06-05 19:49:11.422215 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-05 19:49:11.422223 | orchestrator | Thursday 05 June 2025 19:47:24 +0000 (0:00:00.951) 0:00:26.181 ********* 2025-06-05 19:49:11.422239 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-05 19:49:11.422247 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-05 19:49:11.422255 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-05 19:49:11.422263 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-05 19:49:11.422271 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-05 19:49:11.422279 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-05 19:49:11.422287 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-05 19:49:11.422295 | orchestrator | 2025-06-05 19:49:11.422307 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-05 19:49:11.422315 | orchestrator | Thursday 05 June 2025 19:47:25 +0000 (0:00:01.915) 0:00:28.096 ********* 2025-06-05 19:49:11.422323 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:49:11.422331 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:49:11.422339 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-05 19:49:11.422347 | orchestrator | 2025-06-05 19:49:11.422355 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-05 19:49:11.422363 | orchestrator | Thursday 05 June 2025 19:47:26 +0000 (0:00:00.388) 0:00:28.485 ********* 2025-06-05 19:49:11.422371 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:49:11.422381 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:49:11.422389 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:49:11.422397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:49:11.422450 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-05 19:49:11.422459 | orchestrator | 2025-06-05 19:49:11.422488 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-06-05 19:49:11.422498 | orchestrator | Thursday 05 June 2025 19:48:13 +0000 (0:00:47.007) 0:01:15.493 ********* 2025-06-05 19:49:11.422506 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422514 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422522 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422530 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422538 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422546 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422554 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-05 19:49:11.422562 | orchestrator | 2025-06-05 19:49:11.422570 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-05 19:49:11.422578 | orchestrator | Thursday 05 June 2025 19:48:39 +0000 (0:00:25.829) 0:01:41.322 ********* 2025-06-05 19:49:11.422586 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422594 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422602 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422610 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422618 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422625 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422633 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-05 19:49:11.422641 | orchestrator | 2025-06-05 19:49:11.422649 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-05 19:49:11.422673 | orchestrator | Thursday 05 June 2025 19:48:51 +0000 (0:00:12.319) 0:01:53.642 ********* 2025-06-05 19:49:11.422682 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422690 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-05 19:49:11.422698 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-05 19:49:11.422705 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422713 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-05 19:49:11.422721 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-05 19:49:11.422734 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422742 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-05 19:49:11.422750 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-05 19:49:11.422758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422766 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-05 19:49:11.422774 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-05 19:49:11.422782 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-05 19:49:11.422796 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-06-05 19:49:11.422804 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-05 19:49:11.422812 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-05 19:49:11.422820 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-05 19:49:11.422827 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-05 19:49:11.422835 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-05 19:49:11.422843 | orchestrator |
2025-06-05 19:49:11.422851 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:49:11.422859 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-05 19:49:11.422869 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-05 19:49:11.422877 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-05 19:49:11.422885 | orchestrator |
2025-06-05 19:49:11.422893 | orchestrator |
2025-06-05 19:49:11.422901 | orchestrator |
2025-06-05 19:49:11.422909 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:49:11.422917 | orchestrator | Thursday 05 June 2025 19:49:08 +0000 (0:00:17.439) 0:02:11.082 *********
2025-06-05 19:49:11.422924 | orchestrator | ===============================================================================
2025-06-05 19:49:11.422932 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.01s
2025-06-05 19:49:11.422940 | orchestrator | generate keys ---------------------------------------------------------- 25.83s
2025-06-05 19:49:11.422948 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.44s
2025-06-05 19:49:11.422956 | orchestrator | get keys from monitors ------------------------------------------------- 12.32s
2025-06-05 19:49:11.422963 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.23s
2025-06-05 19:49:11.422971 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.04s
2025-06-05 19:49:11.422979 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.92s
2025-06-05 19:49:11.422987 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.52s
2025-06-05 19:49:11.422995 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s
2025-06-05 19:49:11.423003 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.79s
2025-06-05 19:49:11.423011 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.75s
2025-06-05 19:49:11.423018 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.73s
2025-06-05 19:49:11.423026 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.70s
2025-06-05 19:49:11.423034 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s
2025-06-05 19:49:11.423042 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.62s
2025-06-05 19:49:11.423050 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.60s
2025-06-05 19:49:11.423057 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.59s
2025-06-05 19:49:11.423065 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.57s
2025-06-05 19:49:11.423073 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s
2025-06-05 19:49:11.423081 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.55s
2025-06-05 19:49:11.423093 | orchestrator | 2025-06-05 19:49:11 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED
2025-06-05 19:49:11.423106 | orchestrator | 2025-06-05 19:49:11 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED
2025-06-05 19:49:11.423114 | orchestrator | 2025-06-05 19:49:11 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:49:14.467465 | orchestrator | 2025-06-05 19:49:14 | INFO  | Task dbc0a3bb-0131-4ed1-a4c4-92c908d84c37 is in state STARTED
2025-06-05 19:49:14.470344 | orchestrator | 2025-06-05 19:49:14 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED
2025-06-05 19:49:14.472063 | orchestrator | 2025-06-05 19:49:14 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED
2025-06-05 19:49:14.472157 | orchestrator | 2025-06-05 19:49:14 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:49:17.515372 | orchestrator | 2025-06-05 19:49:17 | INFO  | Task dbc0a3bb-0131-4ed1-a4c4-92c908d84c37 is in state SUCCESS
2025-06-05 19:49:17.516306 | orchestrator |
2025-06-05 19:49:17.516345 | orchestrator |
2025-06-05 19:49:17.516353 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:49:17.516361 | orchestrator |
2025-06-05 19:49:17.516367 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:49:17.516374 | orchestrator | Thursday 05 June 2025 19:47:29 +0000 (0:00:00.244) 0:00:00.244 *********
2025-06-05 19:49:17.516380 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:49:17.516388 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:49:17.516394 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:49:17.516400 | orchestrator |
2025-06-05 19:49:17.516405 | orchestrator | TASK [Group hosts based on enabled services]
*********************************** 2025-06-05 19:49:17.516412 | orchestrator | Thursday 05 June 2025 19:47:29 +0000 (0:00:00.271) 0:00:00.516 ********* 2025-06-05 19:49:17.516419 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-05 19:49:17.516425 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-05 19:49:17.516431 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-05 19:49:17.516438 | orchestrator | 2025-06-05 19:49:17.516445 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-05 19:49:17.516451 | orchestrator | 2025-06-05 19:49:17.516457 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-05 19:49:17.516463 | orchestrator | Thursday 05 June 2025 19:47:30 +0000 (0:00:00.378) 0:00:00.895 ********* 2025-06-05 19:49:17.516470 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:49:17.516478 | orchestrator | 2025-06-05 19:49:17.516511 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-05 19:49:17.516518 | orchestrator | Thursday 05 June 2025 19:47:30 +0000 (0:00:00.458) 0:00:01.353 ********* 2025-06-05 19:49:17.516546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.516590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.516649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.516665 | 
orchestrator | 2025-06-05 19:49:17.516671 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-05 19:49:17.516721 | orchestrator | Thursday 05 June 2025 19:47:31 +0000 (0:00:01.007) 0:00:02.361 ********* 2025-06-05 19:49:17.516875 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:49:17.516883 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:49:17.516889 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:49:17.516895 | orchestrator | 2025-06-05 19:49:17.516901 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-05 19:49:17.516907 | orchestrator | Thursday 05 June 2025 19:47:31 +0000 (0:00:00.422) 0:00:02.783 ********* 2025-06-05 19:49:17.516920 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-05 19:49:17.516927 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-05 19:49:17.516933 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-05 19:49:17.516939 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-05 19:49:17.516945 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-05 19:49:17.516951 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-05 19:49:17.516957 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-05 19:49:17.516963 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-05 19:49:17.516969 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-05 19:49:17.516974 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-05 19:49:17.516980 | orchestrator | skipping: 
[testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-05 19:49:17.516986 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-05 19:49:17.516992 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-05 19:49:17.516997 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-05 19:49:17.517003 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-05 19:49:17.517009 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-05 19:49:17.517015 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-05 19:49:17.517021 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-05 19:49:17.517027 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-05 19:49:17.517041 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-05 19:49:17.517047 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-05 19:49:17.517053 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-05 19:49:17.517059 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-05 19:49:17.517064 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-05 19:49:17.517071 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-05 19:49:17.517078 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 
'enabled': 'yes'}) 2025-06-05 19:49:17.517085 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-05 19:49:17.517091 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-05 19:49:17.517096 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-05 19:49:17.517102 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-05 19:49:17.517108 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-05 19:49:17.517119 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-05 19:49:17.517125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-05 19:49:17.517132 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-05 19:49:17.517138 | orchestrator | 2025-06-05 19:49:17.517144 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-05 19:49:17.517150 | orchestrator | Thursday 05 June 2025 19:47:32 +0000 (0:00:00.702) 0:00:03.486 ********* 2025-06-05 19:49:17.517156 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:49:17.517162 | orchestrator | ok: 
[testbed-node-1] 2025-06-05 19:49:17.517167 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:49:17.517173 | orchestrator | 2025-06-05 19:49:17.517179 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-05 19:49:17.517185 | orchestrator | Thursday 05 June 2025 19:47:32 +0000 (0:00:00.294) 0:00:03.780 ********* 2025-06-05 19:49:17.517195 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517203 | orchestrator | 2025-06-05 19:49:17.517209 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-05 19:49:17.517216 | orchestrator | Thursday 05 June 2025 19:47:33 +0000 (0:00:00.105) 0:00:03.886 ********* 2025-06-05 19:49:17.517222 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517228 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.517233 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.517239 | orchestrator | 2025-06-05 19:49:17.517244 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-05 19:49:17.517249 | orchestrator | Thursday 05 June 2025 19:47:33 +0000 (0:00:00.447) 0:00:04.333 ********* 2025-06-05 19:49:17.517255 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:49:17.517265 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:49:17.517270 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:49:17.517276 | orchestrator | 2025-06-05 19:49:17.517282 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-05 19:49:17.517287 | orchestrator | Thursday 05 June 2025 19:47:33 +0000 (0:00:00.291) 0:00:04.625 ********* 2025-06-05 19:49:17.517292 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517298 | orchestrator | 2025-06-05 19:49:17.517303 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-05 19:49:17.517309 | orchestrator | Thursday 
05 June 2025 19:47:33 +0000 (0:00:00.130) 0:00:04.756 ********* 2025-06-05 19:49:17.517314 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517319 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.517325 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.517331 | orchestrator | 2025-06-05 19:49:17.517336 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-05 19:49:17.517341 | orchestrator | Thursday 05 June 2025 19:47:34 +0000 (0:00:00.270) 0:00:05.027 ********* 2025-06-05 19:49:17.517347 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:49:17.517353 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:49:17.517358 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:49:17.517363 | orchestrator | 2025-06-05 19:49:17.517369 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-05 19:49:17.517374 | orchestrator | Thursday 05 June 2025 19:47:34 +0000 (0:00:00.287) 0:00:05.314 ********* 2025-06-05 19:49:17.517380 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517386 | orchestrator | 2025-06-05 19:49:17.517392 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-05 19:49:17.517398 | orchestrator | Thursday 05 June 2025 19:47:34 +0000 (0:00:00.307) 0:00:05.622 ********* 2025-06-05 19:49:17.517404 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517409 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.517415 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.517421 | orchestrator | 2025-06-05 19:49:17.517426 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-05 19:49:17.517432 | orchestrator | Thursday 05 June 2025 19:47:35 +0000 (0:00:00.274) 0:00:05.897 ********* 2025-06-05 19:49:17.517438 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:49:17.517444 | 
orchestrator | ok: [testbed-node-1] 2025-06-05 19:49:17.517450 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:49:17.517455 | orchestrator | 2025-06-05 19:49:17.517461 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-05 19:49:17.517467 | orchestrator | Thursday 05 June 2025 19:47:35 +0000 (0:00:00.323) 0:00:06.220 ********* 2025-06-05 19:49:17.517473 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517478 | orchestrator | 2025-06-05 19:49:17.517569 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-05 19:49:17.517580 | orchestrator | Thursday 05 June 2025 19:47:35 +0000 (0:00:00.128) 0:00:06.349 ********* 2025-06-05 19:49:17.517585 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517592 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.517598 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.517604 | orchestrator | 2025-06-05 19:49:17.517610 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-05 19:49:17.517617 | orchestrator | Thursday 05 June 2025 19:47:35 +0000 (0:00:00.273) 0:00:06.622 ********* 2025-06-05 19:49:17.517622 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:49:17.517629 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:49:17.517634 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:49:17.517641 | orchestrator | 2025-06-05 19:49:17.517647 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-05 19:49:17.517653 | orchestrator | Thursday 05 June 2025 19:47:36 +0000 (0:00:00.493) 0:00:07.116 ********* 2025-06-05 19:49:17.517659 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.517664 | orchestrator | 2025-06-05 19:49:17.517676 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-05 19:49:17.517761 | 
orchestrator | Thursday 05 June 2025 19:47:36 +0000 (0:00:00.116) 0:00:07.232 *********
2025-06-05 19:49:17.517768 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.517774 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.517785 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.517791 | orchestrator |
2025-06-05 19:49:17.517797 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-05 19:49:17.517803 | orchestrator | Thursday 05 June 2025 19:47:36 +0000 (0:00:00.278) 0:00:07.510 *********
2025-06-05 19:49:17.517809 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:49:17.517815 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:49:17.517820 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:49:17.517826 | orchestrator |
2025-06-05 19:49:17.517833 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-05 19:49:17.517839 | orchestrator | Thursday 05 June 2025 19:47:36 +0000 (0:00:00.282) 0:00:07.793 *********
2025-06-05 19:49:17.517845 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.517852 | orchestrator |
2025-06-05 19:49:17.517860 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-05 19:49:17.517867 | orchestrator | Thursday 05 June 2025 19:47:37 +0000 (0:00:00.133) 0:00:07.927 *********
2025-06-05 19:49:17.517876 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.517884 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.517892 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.517900 | orchestrator |
2025-06-05 19:49:17.517908 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-05 19:49:17.517923 | orchestrator | Thursday 05 June 2025 19:47:37 +0000 (0:00:00.428) 0:00:08.355 *********
2025-06-05 19:49:17.517929 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:49:17.517935 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:49:17.517941 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:49:17.517947 | orchestrator |
2025-06-05 19:49:17.517953 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-05 19:49:17.517959 | orchestrator | Thursday 05 June 2025 19:47:37 +0000 (0:00:00.328) 0:00:08.684 *********
2025-06-05 19:49:17.517965 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.517971 | orchestrator |
2025-06-05 19:49:17.517977 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-05 19:49:17.517983 | orchestrator | Thursday 05 June 2025 19:47:38 +0000 (0:00:00.143) 0:00:08.828 *********
2025-06-05 19:49:17.517988 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.517994 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.518000 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.518006 | orchestrator |
2025-06-05 19:49:17.518012 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-05 19:49:17.518060 | orchestrator | Thursday 05 June 2025 19:47:38 +0000 (0:00:00.274) 0:00:09.102 *********
2025-06-05 19:49:17.518067 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:49:17.518073 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:49:17.518080 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:49:17.518086 | orchestrator |
2025-06-05 19:49:17.518093 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-05 19:49:17.518099 | orchestrator | Thursday 05 June 2025 19:47:38 +0000 (0:00:00.294) 0:00:09.397 *********
2025-06-05 19:49:17.518106 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518113 | orchestrator |
2025-06-05 19:49:17.518119 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-05 19:49:17.518126 | orchestrator | Thursday 05 June 2025 19:47:38 +0000 (0:00:00.111) 0:00:09.509 *********
2025-06-05 19:49:17.518133 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518139 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.518145 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.518151 | orchestrator |
2025-06-05 19:49:17.518158 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-05 19:49:17.518207 | orchestrator | Thursday 05 June 2025 19:47:39 +0000 (0:00:00.556) 0:00:10.065 *********
2025-06-05 19:49:17.518214 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:49:17.518221 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:49:17.518227 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:49:17.518234 | orchestrator |
2025-06-05 19:49:17.518241 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-05 19:49:17.518248 | orchestrator | Thursday 05 June 2025 19:47:39 +0000 (0:00:00.317) 0:00:10.382 *********
2025-06-05 19:49:17.518255 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518261 | orchestrator |
2025-06-05 19:49:17.518268 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-05 19:49:17.518275 | orchestrator | Thursday 05 June 2025 19:47:39 +0000 (0:00:00.118) 0:00:10.501 *********
2025-06-05 19:49:17.518281 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518288 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.518296 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.518303 | orchestrator |
2025-06-05 19:49:17.518309 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-05 19:49:17.518316 | orchestrator | Thursday 05 June 2025 19:47:39 +0000 (0:00:00.266) 0:00:10.767 *********
2025-06-05 19:49:17.518322 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:49:17.518328 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:49:17.518334 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:49:17.518340 | orchestrator |
2025-06-05 19:49:17.518347 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-05 19:49:17.518353 | orchestrator | Thursday 05 June 2025 19:47:40 +0000 (0:00:00.460) 0:00:11.227 *********
2025-06-05 19:49:17.518360 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518366 | orchestrator |
2025-06-05 19:49:17.518373 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-05 19:49:17.518380 | orchestrator | Thursday 05 June 2025 19:47:40 +0000 (0:00:00.164) 0:00:11.392 *********
2025-06-05 19:49:17.518386 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518393 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.518400 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.518407 | orchestrator |
2025-06-05 19:49:17.518413 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-05 19:49:17.518421 | orchestrator | Thursday 05 June 2025 19:47:40 +0000 (0:00:00.266) 0:00:11.658 *********
2025-06-05 19:49:17.518427 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:49:17.518434 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:49:17.518441 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:49:17.518449 | orchestrator |
2025-06-05 19:49:17.518456 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-05 19:49:17.518468 | orchestrator | Thursday 05 June 2025 19:47:42 +0000 (0:00:01.559) 0:00:13.218 *********
2025-06-05 19:49:17.518475 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-05 19:49:17.518481 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-05 19:49:17.518505 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-05 19:49:17.518512 | orchestrator |
2025-06-05 19:49:17.518519 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-05 19:49:17.518526 | orchestrator | Thursday 05 June 2025 19:47:44 +0000 (0:00:01.746) 0:00:14.964 *********
2025-06-05 19:49:17.518533 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-05 19:49:17.518540 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-05 19:49:17.518547 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-06-05 19:49:17.518554 | orchestrator |
2025-06-05 19:49:17.518575 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-06-05 19:49:17.518583 | orchestrator | Thursday 05 June 2025 19:47:46 +0000 (0:00:02.727) 0:00:17.692 *********
2025-06-05 19:49:17.518590 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-05 19:49:17.518597 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-05 19:49:17.518604 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-06-05 19:49:17.518611 | orchestrator |
2025-06-05 19:49:17.518618 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-06-05 19:49:17.518625 | orchestrator | Thursday 05 June 2025 19:47:48 +0000 (0:00:01.594) 0:00:19.287 *********
2025-06-05 19:49:17.518631 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518638 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.518645 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.518651 | orchestrator |
2025-06-05 19:49:17.518657 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-06-05 19:49:17.518663 | orchestrator | Thursday 05 June 2025 19:47:48 +0000 (0:00:00.275) 0:00:19.562 *********
2025-06-05 19:49:17.518670 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:49:17.518677 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:49:17.518684 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:49:17.518691 | orchestrator |
2025-06-05 19:49:17.518697 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-05 19:49:17.518704 | orchestrator | Thursday 05 June 2025 19:47:49 +0000 (0:00:00.712) 0:00:19.850 *********
2025-06-05 19:49:17.518711 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:49:17.518718 | orchestrator |
2025-06-05 19:49:17.518724 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-06-05 19:49:17.518730 | orchestrator | Thursday 05 June 2025 19:47:49 +0000 (0:00:00.712) 0:00:20.563 *********
2025-06-05 19:49:17.518743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'},
'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.518766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 
'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.518779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.518790 | 
orchestrator | 2025-06-05 19:49:17.518798 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-05 19:49:17.518805 | orchestrator | Thursday 05 June 2025 19:47:51 +0000 (0:00:01.395) 0:00:21.958 ********* 2025-06-05 19:49:17.518818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-05 19:49:17.518826 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.518841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-05 19:49:17.518857 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.518864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-05 19:49:17.518872 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.518879 | orchestrator | 2025-06-05 19:49:17.518886 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-05 19:49:17.518893 | orchestrator | Thursday 05 June 2025 19:47:51 +0000 (0:00:00.576) 0:00:22.535 ********* 2025-06-05 19:49:17.518909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-05 19:49:17.518921 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.518929 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 
'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-05 19:49:17.518958 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.518974 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-05 19:49:17.518982 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.518989 | orchestrator | 2025-06-05 19:49:17.518996 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-05 19:49:17.519003 | orchestrator | Thursday 05 June 2025 19:47:52 +0000 (0:00:01.205) 0:00:23.740 ********* 2025-06-05 19:49:17.519015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.519033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.519048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': 
{'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-05 19:49:17.519061 | orchestrator | 2025-06-05 19:49:17.519069 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-05 19:49:17.519076 | orchestrator | Thursday 05 June 2025 19:47:54 +0000 (0:00:01.147) 0:00:24.888 ********* 2025-06-05 19:49:17.519083 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:49:17.519090 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:49:17.519097 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:49:17.519103 | orchestrator | 2025-06-05 19:49:17.519110 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-05 19:49:17.519120 | orchestrator | Thursday 05 June 2025 19:47:54 +0000 (0:00:00.286) 0:00:25.175 ********* 2025-06-05 19:49:17.519127 | 
orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:49:17.519134 | orchestrator | 2025-06-05 19:49:17.519141 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-05 19:49:17.519148 | orchestrator | Thursday 05 June 2025 19:47:55 +0000 (0:00:00.751) 0:00:25.926 ********* 2025-06-05 19:49:17.519155 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:49:17.519162 | orchestrator | 2025-06-05 19:49:17.519168 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-05 19:49:17.519175 | orchestrator | Thursday 05 June 2025 19:47:57 +0000 (0:00:02.404) 0:00:28.331 ********* 2025-06-05 19:49:17.519182 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:49:17.519189 | orchestrator | 2025-06-05 19:49:17.519197 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-05 19:49:17.519204 | orchestrator | Thursday 05 June 2025 19:47:59 +0000 (0:00:02.263) 0:00:30.594 ********* 2025-06-05 19:49:17.519211 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:49:17.519218 | orchestrator | 2025-06-05 19:49:17.519225 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-05 19:49:17.519232 | orchestrator | Thursday 05 June 2025 19:48:15 +0000 (0:00:16.207) 0:00:46.801 ********* 2025-06-05 19:49:17.519239 | orchestrator | 2025-06-05 19:49:17.519246 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-05 19:49:17.519254 | orchestrator | Thursday 05 June 2025 19:48:16 +0000 (0:00:00.065) 0:00:46.867 ********* 2025-06-05 19:49:17.519261 | orchestrator | 2025-06-05 19:49:17.519268 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-05 19:49:17.519275 | orchestrator | Thursday 05 June 2025 
19:48:16 +0000 (0:00:00.064) 0:00:46.932 ********* 2025-06-05 19:49:17.519282 | orchestrator | 2025-06-05 19:49:17.519289 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-05 19:49:17.519296 | orchestrator | Thursday 05 June 2025 19:48:16 +0000 (0:00:00.065) 0:00:46.998 ********* 2025-06-05 19:49:17.519304 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:49:17.519310 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:49:17.519317 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:49:17.519325 | orchestrator | 2025-06-05 19:49:17.519340 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:49:17.519348 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-05 19:49:17.519356 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-05 19:49:17.519363 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-05 19:49:17.519370 | orchestrator | 2025-06-05 19:49:17.519377 | orchestrator | 2025-06-05 19:49:17.519384 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:49:17.519392 | orchestrator | Thursday 05 June 2025 19:49:15 +0000 (0:00:59.623) 0:01:46.622 ********* 2025-06-05 19:49:17.519399 | orchestrator | =============================================================================== 2025-06-05 19:49:17.519406 | orchestrator | horizon : Restart horizon container ------------------------------------ 59.62s 2025-06-05 19:49:17.519413 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.21s 2025-06-05 19:49:17.519420 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.73s 2025-06-05 19:49:17.519427 | orchestrator | horizon : 
Creating Horizon database ------------------------------------- 2.40s 2025-06-05 19:49:17.519435 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.26s 2025-06-05 19:49:17.519442 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.75s 2025-06-05 19:49:17.519449 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.59s 2025-06-05 19:49:17.519456 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.56s 2025-06-05 19:49:17.519463 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.40s 2025-06-05 19:49:17.519471 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.21s 2025-06-05 19:49:17.519477 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.15s 2025-06-05 19:49:17.519501 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.01s 2025-06-05 19:49:17.519509 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2025-06-05 19:49:17.519516 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s 2025-06-05 19:49:17.519523 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s 2025-06-05 19:49:17.519530 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.58s 2025-06-05 19:49:17.519537 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-06-05 19:49:17.519544 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-06-05 19:49:17.519551 | orchestrator | horizon : Update policy file name --------------------------------------- 0.46s 2025-06-05 19:49:17.519558 | orchestrator | horizon : 
include_tasks ------------------------------------------------- 0.46s 2025-06-05 19:49:17.519565 | orchestrator | 2025-06-05 19:49:17 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:17.520634 | orchestrator | 2025-06-05 19:49:17 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:17.520662 | orchestrator | 2025-06-05 19:49:17 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:20.569763 | orchestrator | 2025-06-05 19:49:20 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:20.570912 | orchestrator | 2025-06-05 19:49:20 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:20.570953 | orchestrator | 2025-06-05 19:49:20 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:23.627276 | orchestrator | 2025-06-05 19:49:23 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:23.629173 | orchestrator | 2025-06-05 19:49:23 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:23.629229 | orchestrator | 2025-06-05 19:49:23 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:26.681022 | orchestrator | 2025-06-05 19:49:26 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:26.682558 | orchestrator | 2025-06-05 19:49:26 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:26.682592 | orchestrator | 2025-06-05 19:49:26 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:29.727730 | orchestrator | 2025-06-05 19:49:29 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:29.729201 | orchestrator | 2025-06-05 19:49:29 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:29.729233 | orchestrator | 2025-06-05 19:49:29 | INFO  | Wait 1 second(s) until the next check 
2025-06-05 19:49:32.789478 | orchestrator | 2025-06-05 19:49:32 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:32.791732 | orchestrator | 2025-06-05 19:49:32 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:32.794334 | orchestrator | 2025-06-05 19:49:32 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:35.842827 | orchestrator | 2025-06-05 19:49:35 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state STARTED 2025-06-05 19:49:35.845088 | orchestrator | 2025-06-05 19:49:35 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:35.845241 | orchestrator | 2025-06-05 19:49:35 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:38.899051 | orchestrator | 2025-06-05 19:49:38 | INFO  | Task 5730f1a1-d06e-4b95-8099-5a5d42e7025f is in state SUCCESS 2025-06-05 19:49:38.900922 | orchestrator | 2025-06-05 19:49:38 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:38.901388 | orchestrator | 2025-06-05 19:49:38 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:41.959517 | orchestrator | 2025-06-05 19:49:41 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:41.961243 | orchestrator | 2025-06-05 19:49:41 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:49:41.961275 | orchestrator | 2025-06-05 19:49:41 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:45.009747 | orchestrator | 2025-06-05 19:49:45 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:45.009860 | orchestrator | 2025-06-05 19:49:45 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:49:45.009876 | orchestrator | 2025-06-05 19:49:45 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:48.064843 | orchestrator | 2025-06-05 19:49:48 | INFO  | Task 
2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:48.066176 | orchestrator | 2025-06-05 19:49:48 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:49:48.066210 | orchestrator | 2025-06-05 19:49:48 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:51.106422 | orchestrator | 2025-06-05 19:49:51 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:51.107756 | orchestrator | 2025-06-05 19:49:51 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:49:51.108440 | orchestrator | 2025-06-05 19:49:51 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:54.145659 | orchestrator | 2025-06-05 19:49:54 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:54.146477 | orchestrator | 2025-06-05 19:49:54 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:49:54.146523 | orchestrator | 2025-06-05 19:49:54 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:49:57.186233 | orchestrator | 2025-06-05 19:49:57 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:49:57.187871 | orchestrator | 2025-06-05 19:49:57 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:49:57.188085 | orchestrator | 2025-06-05 19:49:57 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:00.229821 | orchestrator | 2025-06-05 19:50:00 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:00.230745 | orchestrator | 2025-06-05 19:50:00 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:00.230780 | orchestrator | 2025-06-05 19:50:00 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:03.268817 | orchestrator | 2025-06-05 19:50:03 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 
19:50:03.270218 | orchestrator | 2025-06-05 19:50:03 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:03.270257 | orchestrator | 2025-06-05 19:50:03 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:06.307226 | orchestrator | 2025-06-05 19:50:06 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:06.308227 | orchestrator | 2025-06-05 19:50:06 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:06.308258 | orchestrator | 2025-06-05 19:50:06 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:09.343874 | orchestrator | 2025-06-05 19:50:09 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:09.345484 | orchestrator | 2025-06-05 19:50:09 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:09.345529 | orchestrator | 2025-06-05 19:50:09 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:12.385912 | orchestrator | 2025-06-05 19:50:12 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:12.387604 | orchestrator | 2025-06-05 19:50:12 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:12.387640 | orchestrator | 2025-06-05 19:50:12 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:15.424922 | orchestrator | 2025-06-05 19:50:15 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:15.425894 | orchestrator | 2025-06-05 19:50:15 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:15.425929 | orchestrator | 2025-06-05 19:50:15 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:18.466314 | orchestrator | 2025-06-05 19:50:18 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:18.468427 | orchestrator | 2025-06-05 19:50:18 | INFO  | Task 
058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:18.468478 | orchestrator | 2025-06-05 19:50:18 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:21.510395 | orchestrator | 2025-06-05 19:50:21 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state STARTED 2025-06-05 19:50:21.511941 | orchestrator | 2025-06-05 19:50:21 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED 2025-06-05 19:50:21.512392 | orchestrator | 2025-06-05 19:50:21 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:50:24.540066 | orchestrator | 2025-06-05 19:50:24 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:50:24.540864 | orchestrator | 2025-06-05 19:50:24 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:50:24.543341 | orchestrator | 2025-06-05 19:50:24 | INFO  | Task 2a2c5bfc-a78e-4ebb-a0a6-fe3d4b25a96c is in state SUCCESS 2025-06-05 19:50:24.545273 | orchestrator | 2025-06-05 19:50:24.545378 | orchestrator | 2025-06-05 19:50:24.545396 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-05 19:50:24.545875 | orchestrator | 2025-06-05 19:50:24.545898 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-05 19:50:24.545910 | orchestrator | Thursday 05 June 2025 19:49:13 +0000 (0:00:00.172) 0:00:00.172 ********* 2025-06-05 19:50:24.545922 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-05 19:50:24.545935 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.545946 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.545956 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder-backup.keyring) 2025-06-05 19:50:24.545967 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.545978 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-05 19:50:24.545989 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-05 19:50:24.546000 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-05 19:50:24.546010 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-05 19:50:24.546251 | orchestrator | 2025-06-05 19:50:24.546267 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-05 19:50:24.546279 | orchestrator | Thursday 05 June 2025 19:49:17 +0000 (0:00:04.446) 0:00:04.619 ********* 2025-06-05 19:50:24.546291 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-05 19:50:24.546302 | orchestrator | 2025-06-05 19:50:24.546313 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-05 19:50:24.546324 | orchestrator | Thursday 05 June 2025 19:49:18 +0000 (0:00:00.937) 0:00:05.556 ********* 2025-06-05 19:50:24.546336 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-05 19:50:24.546347 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.546358 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.546369 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-05 19:50:24.546379 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-05 
19:50:24.546390 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-05 19:50:24.546401 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-05 19:50:24.546412 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-05 19:50:24.546423 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-05 19:50:24.546434 | orchestrator | 2025-06-05 19:50:24.546445 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-05 19:50:24.546526 | orchestrator | Thursday 05 June 2025 19:49:31 +0000 (0:00:12.745) 0:00:18.302 ********* 2025-06-05 19:50:24.546538 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-05 19:50:24.546549 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.546560 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.546571 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-05 19:50:24.546582 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-05 19:50:24.546593 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-05 19:50:24.546604 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-05 19:50:24.546614 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-05 19:50:24.546625 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-05 19:50:24.546636 | orchestrator | 2025-06-05 19:50:24.546647 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:50:24.546658 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:50:24.546670 | orchestrator | 2025-06-05 19:50:24.546766 | orchestrator | 2025-06-05 19:50:24.546780 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:50:24.546888 | orchestrator | Thursday 05 June 2025 19:49:37 +0000 (0:00:06.237) 0:00:24.539 ********* 2025-06-05 19:50:24.546904 | orchestrator | =============================================================================== 2025-06-05 19:50:24.546915 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.75s 2025-06-05 19:50:24.546926 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.24s 2025-06-05 19:50:24.546949 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.45s 2025-06-05 19:50:24.546961 | orchestrator | Create share directory -------------------------------------------------- 0.94s 2025-06-05 19:50:24.546972 | orchestrator | 2025-06-05 19:50:24.546983 | orchestrator | 2025-06-05 19:50:24.546994 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:50:24.547005 | orchestrator | 2025-06-05 19:50:24.547061 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:50:24.547075 | orchestrator | Thursday 05 June 2025 19:47:29 +0000 (0:00:00.246) 0:00:00.246 ********* 2025-06-05 19:50:24.547087 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:50:24.547099 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:50:24.547110 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:50:24.547120 | orchestrator | 2025-06-05 19:50:24.547132 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:50:24.547143 | orchestrator | Thursday 05 June 2025 19:47:29 +0000 (0:00:00.268) 0:00:00.514 ********* 2025-06-05 19:50:24.547153 | 
orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-05 19:50:24.547165 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-05 19:50:24.547176 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-05 19:50:24.547186 | orchestrator | 2025-06-05 19:50:24.547197 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-05 19:50:24.547208 | orchestrator | 2025-06-05 19:50:24.547219 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-05 19:50:24.547230 | orchestrator | Thursday 05 June 2025 19:47:30 +0000 (0:00:00.389) 0:00:00.904 ********* 2025-06-05 19:50:24.547240 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:50:24.547251 | orchestrator | 2025-06-05 19:50:24.547262 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-05 19:50:24.547273 | orchestrator | Thursday 05 June 2025 19:47:30 +0000 (0:00:00.525) 0:00:01.430 ********* 2025-06-05 19:50:24.547302 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.547320 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.547372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.547388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547466 | orchestrator | 2025-06-05 19:50:24.547482 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-05 19:50:24.547494 | orchestrator | Thursday 05 June 2025 19:47:32 +0000 (0:00:01.764) 0:00:03.194 ********* 2025-06-05 19:50:24.547511 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-05 19:50:24.547524 | orchestrator | 2025-06-05 19:50:24.547537 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-05 19:50:24.547550 | orchestrator | Thursday 05 June 2025 19:47:33 +0000 (0:00:00.865) 0:00:04.060 ********* 2025-06-05 19:50:24.547563 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:50:24.547575 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:50:24.547587 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:50:24.547600 | orchestrator | 2025-06-05 19:50:24.547613 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-05 19:50:24.547625 | orchestrator | Thursday 05 June 2025 19:47:33 +0000 (0:00:00.426) 0:00:04.486 ********* 2025-06-05 19:50:24.547645 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:50:24.547658 | orchestrator | 2025-06-05 19:50:24.547670 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-05 19:50:24.547711 | orchestrator | Thursday 05 June 2025 19:47:34 +0000 (0:00:00.650) 0:00:05.137 ********* 2025-06-05 19:50:24.547724 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml 
for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:50:24.547736 | orchestrator | 2025-06-05 19:50:24.547749 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-05 19:50:24.547761 | orchestrator | Thursday 05 June 2025 19:47:34 +0000 (0:00:00.501) 0:00:05.639 ********* 2025-06-05 19:50:24.547775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.547790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.547809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.547834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.547915 | orchestrator | 2025-06-05 19:50:24.547926 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-05 19:50:24.547942 | orchestrator | Thursday 05 June 2025 19:47:38 +0000 (0:00:03.622) 0:00:09.261 ********* 2025-06-05 19:50:24.547963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:50:24.547982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.547993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:50:24.548005 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.548017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:50:24.548029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:50:24.548070 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.548082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:50:24.548094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:50:24.548117 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.548128 | orchestrator | 2025-06-05 19:50:24.548139 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-05 19:50:24.548150 | orchestrator | Thursday 05 June 2025 19:47:38 +0000 (0:00:00.510) 0:00:09.771 ********* 2025-06-05 19:50:24.548172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:50:24.548197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:50:24.548221 | 
orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.548233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:50:24.548245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:50:24.548274 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.548298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-05 19:50:24.548311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-05 19:50:24.548334 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.548345 | orchestrator | 2025-06-05 19:50:24.548357 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-05 19:50:24.548368 | orchestrator | Thursday 05 June 2025 19:47:39 +0000 (0:00:00.781) 0:00:10.552 ********* 2025-06-05 19:50:24.548380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.548397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.548427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.548440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548538 | orchestrator | 2025-06-05 19:50:24.548557 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-05 19:50:24.548574 | orchestrator | Thursday 05 June 2025 19:47:43 +0000 (0:00:03.751) 0:00:14.303 ********* 2025-06-05 19:50:24.548592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.548610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.548703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.548730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.548771 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.548813 | orchestrator | 2025-06-05 
19:50:24.548825 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-05 19:50:24.548836 | orchestrator | Thursday 05 June 2025 19:47:49 +0000 (0:00:05.566) 0:00:19.870 ********* 2025-06-05 19:50:24.548997 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:50:24.549012 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:50:24.549023 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:50:24.549034 | orchestrator | 2025-06-05 19:50:24.549045 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-05 19:50:24.549056 | orchestrator | Thursday 05 June 2025 19:47:50 +0000 (0:00:01.368) 0:00:21.238 ********* 2025-06-05 19:50:24.549067 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.549083 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.549095 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.549105 | orchestrator | 2025-06-05 19:50:24.549117 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-05 19:50:24.549214 | orchestrator | Thursday 05 June 2025 19:47:50 +0000 (0:00:00.488) 0:00:21.727 ********* 2025-06-05 19:50:24.549232 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.549252 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.549270 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.549289 | orchestrator | 2025-06-05 19:50:24.549307 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-05 19:50:24.549325 | orchestrator | Thursday 05 June 2025 19:47:51 +0000 (0:00:00.518) 0:00:22.245 ********* 2025-06-05 19:50:24.549344 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.549364 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.549385 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.549404 | orchestrator | 2025-06-05 
19:50:24.549423 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-05 19:50:24.549439 | orchestrator | Thursday 05 June 2025 19:47:51 +0000 (0:00:00.275) 0:00:22.520 ********* 2025-06-05 19:50:24.549452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.549465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.549488 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.549507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.549529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.549542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-05 19:50:24.549554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.549573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.549585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.549596 | orchestrator | 2025-06-05 19:50:24.549607 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-05 19:50:24.549618 | orchestrator | Thursday 05 June 2025 19:47:54 +0000 (0:00:02.568) 0:00:25.089 ********* 2025-06-05 19:50:24.549629 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.549640 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.549651 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.549662 | orchestrator | 2025-06-05 19:50:24.549673 | orchestrator | TASK [keystone : 
Copying over wsgi-keystone.conf] ****************************** 2025-06-05 19:50:24.549718 | orchestrator | Thursday 05 June 2025 19:47:54 +0000 (0:00:00.283) 0:00:25.372 ********* 2025-06-05 19:50:24.549733 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-05 19:50:24.549754 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-05 19:50:24.549766 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-05 19:50:24.549777 | orchestrator | 2025-06-05 19:50:24.549795 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-05 19:50:24.549806 | orchestrator | Thursday 05 June 2025 19:47:56 +0000 (0:00:01.923) 0:00:27.295 ********* 2025-06-05 19:50:24.549818 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:50:24.549829 | orchestrator | 2025-06-05 19:50:24.549840 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-05 19:50:24.549853 | orchestrator | Thursday 05 June 2025 19:47:57 +0000 (0:00:00.883) 0:00:28.179 ********* 2025-06-05 19:50:24.549865 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.549877 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.549889 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.549902 | orchestrator | 2025-06-05 19:50:24.549914 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-05 19:50:24.549927 | orchestrator | Thursday 05 June 2025 19:47:57 +0000 (0:00:00.470) 0:00:28.650 ********* 2025-06-05 19:50:24.549940 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-05 19:50:24.549960 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:50:24.549973 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-05 19:50:24.549985 
| orchestrator | 2025-06-05 19:50:24.549997 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-05 19:50:24.550009 | orchestrator | Thursday 05 June 2025 19:47:58 +0000 (0:00:00.954) 0:00:29.605 ********* 2025-06-05 19:50:24.550063 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:50:24.550076 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:50:24.550089 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:50:24.550102 | orchestrator | 2025-06-05 19:50:24.550114 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-05 19:50:24.550126 | orchestrator | Thursday 05 June 2025 19:47:59 +0000 (0:00:00.275) 0:00:29.880 ********* 2025-06-05 19:50:24.550137 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-05 19:50:24.550148 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-05 19:50:24.550159 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-05 19:50:24.550170 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-05 19:50:24.550181 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-05 19:50:24.550192 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-05 19:50:24.550203 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-05 19:50:24.550215 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-05 19:50:24.550225 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-05 19:50:24.550236 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-05 19:50:24.550248 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-05 19:50:24.550258 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-05 19:50:24.550270 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-05 19:50:24.550281 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-05 19:50:24.550292 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-05 19:50:24.550303 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-05 19:50:24.550314 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-05 19:50:24.550325 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-05 19:50:24.550336 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-05 19:50:24.550347 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-05 19:50:24.550357 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-05 19:50:24.550368 | orchestrator | 2025-06-05 19:50:24.550379 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-05 19:50:24.550390 | orchestrator | Thursday 05 June 2025 19:48:07 +0000 (0:00:08.716) 0:00:38.597 ********* 2025-06-05 19:50:24.550401 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-05 19:50:24.550412 | 
orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-05 19:50:24.550423 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-05 19:50:24.550442 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-05 19:50:24.550453 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-05 19:50:24.550464 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-05 19:50:24.550475 | orchestrator | 2025-06-05 19:50:24.550486 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-05 19:50:24.550504 | orchestrator | Thursday 05 June 2025 19:48:10 +0000 (0:00:02.533) 0:00:41.131 ********* 2025-06-05 19:50:24.550517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.550568 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.550584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-05 19:50:24.550597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.550632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.550644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-05 19:50:24.550656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.550667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.550708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-05 19:50:24.550722 | orchestrator | 2025-06-05 19:50:24.550733 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-05 19:50:24.550745 | orchestrator | Thursday 05 June 2025 19:48:12 +0000 (0:00:02.237) 0:00:43.368 ********* 2025-06-05 19:50:24.550756 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:50:24.550767 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:50:24.550778 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:50:24.550796 | orchestrator | 2025-06-05 19:50:24.550807 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-05 19:50:24.550818 | orchestrator | Thursday 05 June 2025 19:48:12 +0000 (0:00:00.270) 0:00:43.639 ********* 2025-06-05 19:50:24.550829 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:50:24.550840 | orchestrator | 2025-06-05 19:50:24.550851 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-05 19:50:24.550862 | orchestrator | Thursday 05 June 2025 19:48:15 +0000 (0:00:02.405) 0:00:46.044 ********* 2025-06-05 19:50:24.550873 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:50:24.550884 | orchestrator | 2025-06-05 19:50:24.550895 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-05 19:50:24.550906 | orchestrator | Thursday 05 June 2025 19:48:17 +0000 (0:00:02.716) 0:00:48.761 ********* 2025-06-05 19:50:24.550917 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:50:24.550928 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:50:24.550939 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:50:24.550950 | orchestrator | 2025-06-05 19:50:24.550961 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-05 19:50:24.550977 | orchestrator | Thursday 05 June 2025 
19:48:18 +0000 (0:00:00.910) 0:00:49.671 *********
2025-06-05 19:50:24.550988 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:50:24.550999 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:50:24.551010 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:50:24.551021 | orchestrator |
2025-06-05 19:50:24.551038 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-06-05 19:50:24.551050 | orchestrator | Thursday 05 June 2025 19:48:19 +0000 (0:00:00.295) 0:00:49.967 *********
2025-06-05 19:50:24.551061 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:50:24.551072 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:50:24.551082 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:50:24.551093 | orchestrator |
2025-06-05 19:50:24.551104 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-06-05 19:50:24.551115 | orchestrator | Thursday 05 June 2025 19:48:19 +0000 (0:00:00.337) 0:00:50.304 *********
2025-06-05 19:50:24.551126 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:50:24.551137 | orchestrator |
2025-06-05 19:50:24.551148 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-06-05 19:50:24.551159 | orchestrator | Thursday 05 June 2025 19:48:34 +0000 (0:00:14.662) 0:01:04.967 *********
2025-06-05 19:50:24.551170 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:50:24.551181 | orchestrator |
2025-06-05 19:50:24.551192 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-05 19:50:24.551203 | orchestrator | Thursday 05 June 2025 19:48:45 +0000 (0:00:11.200) 0:01:16.167 *********
2025-06-05 19:50:24.551214 | orchestrator |
2025-06-05 19:50:24.551224 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-05 19:50:24.551235 | orchestrator | Thursday 05 June 2025 19:48:45 +0000 (0:00:00.264) 0:01:16.432 *********
2025-06-05 19:50:24.551246 | orchestrator |
2025-06-05 19:50:24.551257 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-06-05 19:50:24.551268 | orchestrator | Thursday 05 June 2025 19:48:45 +0000 (0:00:00.070) 0:01:16.502 *********
2025-06-05 19:50:24.551278 | orchestrator |
2025-06-05 19:50:24.551289 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-06-05 19:50:24.551300 | orchestrator | Thursday 05 June 2025 19:48:45 +0000 (0:00:00.071) 0:01:16.573 *********
2025-06-05 19:50:24.551311 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:50:24.551322 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:50:24.551333 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:50:24.551344 | orchestrator |
2025-06-05 19:50:24.551355 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-06-05 19:50:24.551365 | orchestrator | Thursday 05 June 2025 19:49:11 +0000 (0:00:25.502) 0:01:42.076 *********
2025-06-05 19:50:24.551383 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:50:24.551395 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:50:24.551406 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:50:24.551417 | orchestrator |
2025-06-05 19:50:24.551428 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-06-05 19:50:24.551439 | orchestrator | Thursday 05 June 2025 19:49:20 +0000 (0:00:09.606) 0:01:51.683 *********
2025-06-05 19:50:24.551450 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:50:24.551461 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:50:24.551472 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:50:24.551483 | orchestrator |
2025-06-05 19:50:24.551494 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-05 19:50:24.551505 | orchestrator | Thursday 05 June 2025 19:49:32 +0000 (0:00:11.401) 0:02:03.085 *********
2025-06-05 19:50:24.551516 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:50:24.551527 | orchestrator |
2025-06-05 19:50:24.551538 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-06-05 19:50:24.551549 | orchestrator | Thursday 05 June 2025 19:49:32 +0000 (0:00:00.690) 0:02:03.775 *********
2025-06-05 19:50:24.551560 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:50:24.551571 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:50:24.551582 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:50:24.551593 | orchestrator |
2025-06-05 19:50:24.551604 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-06-05 19:50:24.551615 | orchestrator | Thursday 05 June 2025 19:49:33 +0000 (0:00:00.699) 0:02:04.475 *********
2025-06-05 19:50:24.551626 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:50:24.551637 | orchestrator |
2025-06-05 19:50:24.551648 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-06-05 19:50:24.551659 | orchestrator | Thursday 05 June 2025 19:49:35 +0000 (0:00:01.825) 0:02:06.300 *********
2025-06-05 19:50:24.551670 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-06-05 19:50:24.551719 | orchestrator |
2025-06-05 19:50:24.551732 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-06-05 19:50:24.551743 | orchestrator | Thursday 05 June 2025 19:49:47 +0000 (0:00:11.761) 0:02:18.062 *********
2025-06-05 19:50:24.551754 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-06-05 19:50:24.551765 | orchestrator |
2025-06-05 19:50:24.551776 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-06-05 19:50:24.551787 | orchestrator | Thursday 05 June 2025 19:50:10 +0000 (0:00:23.144) 0:02:41.206 *********
2025-06-05 19:50:24.551798 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-06-05 19:50:24.551809 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-06-05 19:50:24.551820 | orchestrator |
2025-06-05 19:50:24.551831 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-06-05 19:50:24.551842 | orchestrator | Thursday 05 June 2025 19:50:17 +0000 (0:00:07.183) 0:02:48.389 *********
2025-06-05 19:50:24.551853 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:50:24.551864 | orchestrator |
2025-06-05 19:50:24.551875 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-06-05 19:50:24.551886 | orchestrator | Thursday 05 June 2025 19:50:17 +0000 (0:00:00.281) 0:02:48.671 *********
2025-06-05 19:50:24.551902 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:50:24.551913 | orchestrator |
2025-06-05 19:50:24.551924 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-06-05 19:50:24.551936 | orchestrator | Thursday 05 June 2025 19:50:17 +0000 (0:00:00.115) 0:02:48.787 *********
2025-06-05 19:50:24.551947 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:50:24.551957 | orchestrator |
2025-06-05 19:50:24.551974 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-06-05 19:50:24.551986 | orchestrator | Thursday 05 June 2025 19:50:18 +0000 (0:00:00.132) 0:02:48.920 *********
2025-06-05 19:50:24.552004 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:50:24.552015 | orchestrator |
2025-06-05 19:50:24.552026 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-06-05 19:50:24.552037 | orchestrator | Thursday 05 June 2025 19:50:18 +0000 (0:00:00.296) 0:02:49.216 *********
2025-06-05 19:50:24.552048 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:50:24.552059 | orchestrator |
2025-06-05 19:50:24.552069 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-05 19:50:24.552080 | orchestrator | Thursday 05 June 2025 19:50:21 +0000 (0:00:03.226) 0:02:52.442 *********
2025-06-05 19:50:24.552091 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:50:24.552102 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:50:24.552113 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:50:24.552123 | orchestrator |
2025-06-05 19:50:24.552134 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:50:24.552145 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-06-05 19:50:24.552157 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-05 19:50:24.552169 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-05 19:50:24.552179 | orchestrator |
2025-06-05 19:50:24.552190 | orchestrator |
2025-06-05 19:50:24.552201 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:50:24.552212 | orchestrator | Thursday 05 June 2025 19:50:22 +0000 (0:00:00.574) 0:02:53.017 *********
2025-06-05 19:50:24.552223 | orchestrator | ===============================================================================
2025-06-05 19:50:24.552234 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 25.50s
2025-06-05 19:50:24.552244 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.14s
2025-06-05 19:50:24.552255 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.66s
2025-06-05 19:50:24.552266 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.76s
2025-06-05 19:50:24.552276 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.40s
2025-06-05 19:50:24.552288 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.20s
2025-06-05 19:50:24.552299 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.61s
2025-06-05 19:50:24.552310 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.72s
2025-06-05 19:50:24.552321 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.18s
2025-06-05 19:50:24.552331 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.57s
2025-06-05 19:50:24.552342 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.75s
2025-06-05 19:50:24.552353 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.62s
2025-06-05 19:50:24.552364 | orchestrator | keystone : Creating default user role ----------------------------------- 3.23s
2025-06-05 19:50:24.552375 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.72s
2025-06-05 19:50:24.552385 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.57s
2025-06-05 19:50:24.552396 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.53s
2025-06-05 19:50:24.552407 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.41s
2025-06-05 19:50:24.552417 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s
2025-06-05
19:50:24.552428 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.92s
2025-06-05 19:50:24.552445 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.83s
2025-06-05 19:50:24.552456 | orchestrator | 2025-06-05 19:50:24 | INFO  | Task 1b29105e-ad59-47c4-97e5-7acdbe0b859a is in state STARTED
2025-06-05 19:50:24.552467 | orchestrator | 2025-06-05 19:50:24 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:50:24.552477 | orchestrator | 2025-06-05 19:50:24 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED
2025-06-05 19:50:24.552488 | orchestrator | 2025-06-05 19:50:24 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:50:27.589504 | orchestrator | 2025-06-05 19:50:27 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED
2025-06-05 19:50:27.589595 | orchestrator | 2025-06-05 19:50:27 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED
2025-06-05 19:50:27.589872 | orchestrator | 2025-06-05 19:50:27 | INFO  | Task 1b29105e-ad59-47c4-97e5-7acdbe0b859a is in state STARTED
2025-06-05 19:50:27.590479 | orchestrator | 2025-06-05 19:50:27 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:50:27.592456 | orchestrator | 2025-06-05 19:50:27 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state STARTED
2025-06-05 19:50:27.592498 | orchestrator | 2025-06-05 19:50:27 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:50:36.715381 | orchestrator | 2025-06-05 19:50:36 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED
2025-06-05 19:50:36.717076 | orchestrator | 2025-06-05 19:50:36 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED
2025-06-05 19:50:36.719240 | orchestrator | 2025-06-05 19:50:36 | INFO  | Task 21d4ab68-cf8e-4a74-bc09-f2dc879dca6f is in state STARTED
2025-06-05 19:50:36.721163 | orchestrator | 2025-06-05 19:50:36 | INFO  | Task 1b29105e-ad59-47c4-97e5-7acdbe0b859a is in state STARTED
2025-06-05 19:50:36.722827 | orchestrator | 2025-06-05 19:50:36 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:50:36.724951 | orchestrator | 2025-06-05 19:50:36 | INFO  | Task 058fb4f5-3179-4bd2-90ca-c81000d621d0 is in state SUCCESS
2025-06-05 19:50:36.725022 | orchestrator | 2025-06-05 19:50:36 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:51:04.194913 | orchestrator | 2025-06-05 19:51:04 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED
2025-06-05 19:51:04.195510 | orchestrator | 2025-06-05 19:51:04 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED
2025-06-05 19:51:04.197180 | orchestrator | 2025-06-05 19:51:04 | INFO  | Task 21d4ab68-cf8e-4a74-bc09-f2dc879dca6f is in state STARTED
2025-06-05 19:51:04.197730 | orchestrator | 2025-06-05 19:51:04 | INFO  | Task 1b29105e-ad59-47c4-97e5-7acdbe0b859a is in state SUCCESS
2025-06-05 19:51:04.198237 | orchestrator |
2025-06-05 19:51:04.198269 | orchestrator |
2025-06-05 19:51:04.198281 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-06-05 19:51:04.198293 | orchestrator |
2025-06-05 19:51:04.198304 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-06-05 19:51:04.198316 | orchestrator | Thursday 05 June 2025 19:49:41 +0000 (0:00:00.226) 0:00:00.226 *********
2025-06-05 19:51:04.198353 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-06-05 19:51:04.198367 | orchestrator |
2025-06-05 19:51:04.198378 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-06-05 19:51:04.198389 | orchestrator | Thursday 05 June 2025 19:49:42 +0000 (0:00:00.226) 0:00:00.453 *********
2025-06-05 19:51:04.198401 | orchestrator | changed:
[testbed-manager] => (item=/opt/cephclient/configuration)
2025-06-05 19:51:04.198412 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-06-05 19:51:04.198424 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-06-05 19:51:04.198435 | orchestrator |
2025-06-05 19:51:04.198446 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-06-05 19:51:04.198457 | orchestrator | Thursday 05 June 2025 19:49:43 +0000 (0:00:01.181) 0:00:01.635 *********
2025-06-05 19:51:04.198468 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-06-05 19:51:04.198479 | orchestrator |
2025-06-05 19:51:04.198490 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-06-05 19:51:04.198501 | orchestrator | Thursday 05 June 2025 19:49:44 +0000 (0:00:01.081) 0:00:02.716 *********
2025-06-05 19:51:04.198512 | orchestrator | changed: [testbed-manager]
2025-06-05 19:51:04.198523 | orchestrator |
2025-06-05 19:51:04.198534 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-06-05 19:51:04.198545 | orchestrator | Thursday 05 June 2025 19:49:45 +0000 (0:00:00.923) 0:00:03.843 *********
2025-06-05 19:51:04.198556 | orchestrator | changed: [testbed-manager]
2025-06-05 19:51:04.198566 | orchestrator |
2025-06-05 19:51:04.198577 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-06-05 19:51:04.198588 | orchestrator | Thursday 05 June 2025 19:49:46 +0000 (0:00:00.923) 0:00:04.767 *********
2025-06-05 19:51:04.198599 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-06-05 19:51:04.198610 | orchestrator | ok: [testbed-manager]
2025-06-05 19:51:04.198621 | orchestrator |
2025-06-05 19:51:04.198632 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-06-05 19:51:04.198642 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:40.740) 0:00:45.507 *********
2025-06-05 19:51:04.198653 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-06-05 19:51:04.198665 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-06-05 19:51:04.198675 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-06-05 19:51:04.198686 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-06-05 19:51:04.198697 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-06-05 19:51:04.198708 | orchestrator |
2025-06-05 19:51:04.198719 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-06-05 19:51:04.198730 | orchestrator | Thursday 05 June 2025 19:50:30 +0000 (0:00:03.061) 0:00:48.569 *********
2025-06-05 19:51:04.198741 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-06-05 19:51:04.198751 | orchestrator |
2025-06-05 19:51:04.198762 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-06-05 19:51:04.198773 | orchestrator | Thursday 05 June 2025 19:50:30 +0000 (0:00:00.322) 0:00:48.891 *********
2025-06-05 19:51:04.198784 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:51:04.198823 | orchestrator |
2025-06-05 19:51:04.198837 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-06-05 19:51:04.198850 | orchestrator | Thursday 05 June 2025 19:50:30 +0000 (0:00:00.097) 0:00:48.989 *********
2025-06-05 19:51:04.198862 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:51:04.198874 | orchestrator |
2025-06-05 19:51:04.198887 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-06-05 19:51:04.198900 | orchestrator | Thursday 05 June 2025 19:50:30 +0000 (0:00:00.216) 0:00:49.205 *********
2025-06-05 19:51:04.198913 | orchestrator | changed: [testbed-manager]
2025-06-05 19:51:04.198932 | orchestrator |
2025-06-05 19:51:04.198945 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-06-05 19:51:04.198959 | orchestrator | Thursday 05 June 2025 19:50:32 +0000 (0:00:01.445) 0:00:50.651 *********
2025-06-05 19:51:04.198971 | orchestrator | changed: [testbed-manager]
2025-06-05 19:51:04.198983 | orchestrator |
2025-06-05 19:51:04.198996 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-06-05 19:51:04.199008 | orchestrator | Thursday 05 June 2025 19:50:32 +0000 (0:00:00.609) 0:00:51.260 *********
2025-06-05 19:51:04.199021 | orchestrator | changed: [testbed-manager]
2025-06-05 19:51:04.199033 | orchestrator |
2025-06-05 19:51:04.199046 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-06-05 19:51:04.199058 | orchestrator | Thursday 05 June 2025 19:50:33 +0000 (0:00:00.494) 0:00:51.755 *********
2025-06-05 19:51:04.199070 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-06-05 19:51:04.199094 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-06-05 19:51:04.199107 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-06-05 19:51:04.199120 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-06-05 19:51:04.199133 | orchestrator |
2025-06-05 19:51:04.199146 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:51:04.199159 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 19:51:04.199172 | orchestrator |
2025-06-05 19:51:04.199185 | orchestrator |
2025-06-05 19:51:04.199209 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:51:04.199221 | orchestrator | Thursday 05 June 2025 19:50:34 +0000 (0:00:01.295) 0:00:53.051 *********
2025-06-05 19:51:04.199232 | orchestrator | ===============================================================================
2025-06-05 19:51:04.199243 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.74s
2025-06-05 19:51:04.199254 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.06s
2025-06-05 19:51:04.199265 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.45s
2025-06-05 19:51:04.199275 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.30s
2025-06-05 19:51:04.199286 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s
2025-06-05 19:51:04.199297 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.13s
2025-06-05 19:51:04.199308 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.08s
2025-06-05 19:51:04.199318 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.92s
2025-06-05 19:51:04.199329 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.61s
2025-06-05 19:51:04.199340 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.49s
2025-06-05 19:51:04.199350 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.32s
2025-06-05 19:51:04.199452 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2025-06-05 19:51:04.199466 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.22s
2025-06-05 19:51:04.199477 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.10s
2025-06-05 19:51:04.199488 | orchestrator |
2025-06-05 19:51:04.199499 | orchestrator |
2025-06-05 19:51:04.199511 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-06-05 19:51:04.199530 | orchestrator |
2025-06-05 19:51:04.199549 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-06-05 19:51:04.199565 | orchestrator | Thursday 05 June 2025 19:50:26 +0000 (0:00:00.160) 0:00:00.160 *********
2025-06-05 19:51:04.199583 | orchestrator | changed: [localhost]
2025-06-05 19:51:04.199602 | orchestrator |
2025-06-05 19:51:04.199621 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-06-05 19:51:04.199650 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:00.999) 0:00:01.160 *********
2025-06-05 19:51:04.199668 | orchestrator | changed: [localhost]
2025-06-05 19:51:04.199686 | orchestrator |
2025-06-05 19:51:04.199705 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-06-05 19:51:04.199723 | orchestrator | Thursday 05 June 2025 19:50:57 +0000 (0:00:29.367) 0:00:30.527 *********
2025-06-05 19:51:04.199742 | orchestrator | changed: [localhost]
2025-06-05 19:51:04.199760 | orchestrator |
2025-06-05 19:51:04.199779 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:51:04.199888 | orchestrator |
2025-06-05 19:51:04.199912 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:51:04.199929 | orchestrator | Thursday 05 June 2025 19:51:01 +0000 (0:00:03.933) 0:00:34.461 *********
2025-06-05 19:51:04.199944 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:51:04.199955 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:51:04.199966 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:51:04.199977 | orchestrator |
2025-06-05 19:51:04.199988 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:51:04.199999 | orchestrator | Thursday 05 June 2025 19:51:01 +0000 (0:00:00.332) 0:00:34.794 *********
2025-06-05 19:51:04.200010 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-05 19:51:04.200021 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-05 19:51:04.200032 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-05 19:51:04.200042 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-05 19:51:04.200053 | orchestrator |
2025-06-05 19:51:04.200064 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-05 19:51:04.200075 | orchestrator | skipping: no hosts matched
2025-06-05 19:51:04.200088 | orchestrator |
2025-06-05 19:51:04.200101 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:51:04.200113 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:51:04.200127 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:51:04.200141 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:51:04.200154 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:51:04.200167 | orchestrator |
2025-06-05 19:51:04.200179 | orchestrator |
2025-06-05 19:51:04.200192 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:51:04.200213 | orchestrator | Thursday 05 June 2025 19:51:01 +0000 (0:00:00.391) 0:00:35.186 *********
2025-06-05 19:51:04.200226 |
orchestrator | =============================================================================== 2025-06-05 19:51:04.200239 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.37s 2025-06-05 19:51:04.200252 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.93s 2025-06-05 19:51:04.200265 | orchestrator | Ensure the destination directory exists --------------------------------- 1.00s 2025-06-05 19:51:04.200277 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.39s 2025-06-05 19:51:04.200302 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-05 19:51:04.200481 | orchestrator | 2025-06-05 19:51:04 | INFO  | Task 13e0a691-6fad-442a-a17d-f02a6326c666 is in state STARTED 2025-06-05 19:51:04.200502 | orchestrator | 2025-06-05 19:51:04 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:51:04.200513 | orchestrator | 2025-06-05 19:51:04 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:51:07.232935 | orchestrator | 2025-06-05 19:51:07 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:51:07.233225 | orchestrator | 2025-06-05 19:51:07 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:51:07.234281 | orchestrator | 2025-06-05 19:51:07 | INFO  | Task 21d4ab68-cf8e-4a74-bc09-f2dc879dca6f is in state STARTED 2025-06-05 19:51:07.235256 | orchestrator | 2025-06-05 19:51:07 | INFO  | Task 13e0a691-6fad-442a-a17d-f02a6326c666 is in state STARTED 2025-06-05 19:51:07.238187 | orchestrator | 2025-06-05 19:51:07 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:51:07.239545 | orchestrator | 2025-06-05 19:51:07 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:51:10.263022 | orchestrator | 2025-06-05 19:51:10 | INFO  | Task 
13e0a691-6fad-442a-a17d-f02a6326c666 is in state STARTED 2025-06-05 19:52:01.790002 | orchestrator | 2025-06-05 19:52:01 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:01.790100 | orchestrator | 2025-06-05 19:52:01 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:04.812307 | orchestrator | 2025-06-05 19:52:04 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:04.812890 | orchestrator | 2025-06-05 19:52:04 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:52:04.813312 | orchestrator | 2025-06-05 19:52:04 | INFO  | Task 21d4ab68-cf8e-4a74-bc09-f2dc879dca6f is in state STARTED 2025-06-05 19:52:04.814131 | orchestrator | 2025-06-05 19:52:04 | INFO  | Task 13e0a691-6fad-442a-a17d-f02a6326c666 is in state STARTED 2025-06-05 19:52:04.820394 | orchestrator | 2025-06-05 19:52:04 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:04.820450 | orchestrator | 2025-06-05 19:52:04 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:07.844693 | orchestrator | 2025-06-05 19:52:07 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:07.845439 | orchestrator | 2025-06-05 19:52:07 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:52:07.846363 | orchestrator | 2025-06-05 19:52:07 | INFO  | Task 21d4ab68-cf8e-4a74-bc09-f2dc879dca6f is in state SUCCESS 2025-06-05 19:52:07.847194 | orchestrator | 2025-06-05 19:52:07 | INFO  | Task 13e0a691-6fad-442a-a17d-f02a6326c666 is in state STARTED 2025-06-05 19:52:07.848392 | orchestrator | 2025-06-05 19:52:07 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:07.848426 | orchestrator | 2025-06-05 19:52:07 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:10.881336 | orchestrator | 2025-06-05 19:52:10 | INFO  | Task 
99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:29.111327 | orchestrator | 2025-06-05 19:52:29 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:52:29.112799 | orchestrator | 2025-06-05 19:52:29 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:29.113866 | orchestrator | 2025-06-05 19:52:29 | INFO  | Task 13e0a691-6fad-442a-a17d-f02a6326c666 is in state SUCCESS 2025-06-05 19:52:29.114266 | orchestrator | 2025-06-05 19:52:29.114293 | orchestrator | 2025-06-05 19:52:29.114313 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-06-05 19:52:29.114479 | orchestrator | 2025-06-05 19:52:29.114492 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-05 19:52:29.114504 | orchestrator | Thursday 05 June 2025 19:50:39 +0000 (0:00:00.212) 0:00:00.212 ********* 2025-06-05 19:52:29.114515 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114527 | orchestrator | 2025-06-05 19:52:29.114538 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-05 19:52:29.114549 | orchestrator | Thursday 05 June 2025 19:50:40 +0000 (0:00:01.458) 0:00:01.670 ********* 2025-06-05 19:52:29.114560 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114599 | orchestrator | 2025-06-05 19:52:29.114612 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-05 19:52:29.114623 | orchestrator | Thursday 05 June 2025 19:50:41 +0000 (0:00:00.928) 0:00:02.599 ********* 2025-06-05 19:52:29.114635 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114646 | orchestrator | 2025-06-05 19:52:29.114657 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-05 19:52:29.114667 | orchestrator | Thursday 05 June 2025 19:50:42 +0000
(0:00:00.959) 0:00:03.558 ********* 2025-06-05 19:52:29.114706 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114717 | orchestrator | 2025-06-05 19:52:29.114728 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-05 19:52:29.114739 | orchestrator | Thursday 05 June 2025 19:50:43 +0000 (0:00:01.064) 0:00:04.623 ********* 2025-06-05 19:52:29.114750 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114761 | orchestrator | 2025-06-05 19:52:29.114785 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-05 19:52:29.114840 | orchestrator | Thursday 05 June 2025 19:50:44 +0000 (0:00:01.004) 0:00:05.627 ********* 2025-06-05 19:52:29.114854 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114865 | orchestrator | 2025-06-05 19:52:29.114876 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-05 19:52:29.114889 | orchestrator | Thursday 05 June 2025 19:50:45 +0000 (0:00:01.042) 0:00:06.669 ********* 2025-06-05 19:52:29.114902 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.114915 | orchestrator | 2025-06-05 19:52:29.114927 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-05 19:52:29.114939 | orchestrator | Thursday 05 June 2025 19:50:47 +0000 (0:00:02.043) 0:00:08.713 ********* 2025-06-05 19:52:29.114988 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.115000 | orchestrator | 2025-06-05 19:52:29.115013 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-05 19:52:29.115063 | orchestrator | Thursday 05 June 2025 19:50:48 +0000 (0:00:01.072) 0:00:09.785 ********* 2025-06-05 19:52:29.115075 | orchestrator | changed: [testbed-manager] 2025-06-05 19:52:29.115086 | orchestrator | 2025-06-05 19:52:29.115097 | orchestrator | TASK [Remove 
temporary file for ceph_dashboard_password] *********************** 2025-06-05 19:52:29.115109 | orchestrator | Thursday 05 June 2025 19:51:41 +0000 (0:00:52.686) 0:01:02.471 ********* 2025-06-05 19:52:29.115120 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:52:29.115131 | orchestrator | 2025-06-05 19:52:29.115141 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-05 19:52:29.115152 | orchestrator | 2025-06-05 19:52:29.115163 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-05 19:52:29.115174 | orchestrator | Thursday 05 June 2025 19:51:41 +0000 (0:00:00.137) 0:01:02.609 ********* 2025-06-05 19:52:29.115185 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:29.115196 | orchestrator | 2025-06-05 19:52:29.115232 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-05 19:52:29.115244 | orchestrator | 2025-06-05 19:52:29.115255 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-05 19:52:29.115266 | orchestrator | Thursday 05 June 2025 19:51:53 +0000 (0:00:11.698) 0:01:14.307 ********* 2025-06-05 19:52:29.115277 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:52:29.115288 | orchestrator | 2025-06-05 19:52:29.115299 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-05 19:52:29.115309 | orchestrator | 2025-06-05 19:52:29.115320 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-05 19:52:29.115331 | orchestrator | Thursday 05 June 2025 19:51:54 +0000 (0:00:01.275) 0:01:15.583 ********* 2025-06-05 19:52:29.115349 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:52:29.115367 | orchestrator | 2025-06-05 19:52:29.115386 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-05 19:52:29.115405 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-05 19:52:29.115424 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:52:29.115443 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:52:29.115461 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:52:29.115495 | orchestrator | 2025-06-05 19:52:29.115515 | orchestrator | 2025-06-05 19:52:29.115527 | orchestrator | 2025-06-05 19:52:29.115538 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:52:29.115549 | orchestrator | Thursday 05 June 2025 19:52:05 +0000 (0:00:11.309) 0:01:26.892 ********* 2025-06-05 19:52:29.115560 | orchestrator | =============================================================================== 2025-06-05 19:52:29.115571 | orchestrator | Create admin user ------------------------------------------------------ 52.69s 2025-06-05 19:52:29.115582 | orchestrator | Restart ceph manager service ------------------------------------------- 24.28s 2025-06-05 19:52:29.115607 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2025-06-05 19:52:29.115619 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.46s 2025-06-05 19:52:29.115630 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.07s 2025-06-05 19:52:29.115641 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.06s 2025-06-05 19:52:29.115652 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.04s 2025-06-05 19:52:29.115663 | 
orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.00s 2025-06-05 19:52:29.115674 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.96s 2025-06-05 19:52:29.115685 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-06-05 19:52:29.115696 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-06-05 19:52:29.115707 | orchestrator | 2025-06-05 19:52:29.115943 | orchestrator | 2025-06-05 19:52:29.115958 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:52:29.115969 | orchestrator | 2025-06-05 19:52:29.115979 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:52:29.115990 | orchestrator | Thursday 05 June 2025 19:51:06 +0000 (0:00:00.521) 0:00:00.521 ********* 2025-06-05 19:52:29.116001 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:52:29.116012 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:52:29.116023 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:52:29.116060 | orchestrator | 2025-06-05 19:52:29.116071 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:52:29.116089 | orchestrator | Thursday 05 June 2025 19:51:07 +0000 (0:00:00.336) 0:00:00.858 ********* 2025-06-05 19:52:29.116100 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-05 19:52:29.116111 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-05 19:52:29.116122 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-05 19:52:29.116133 | orchestrator | 2025-06-05 19:52:29.116143 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-05 19:52:29.116154 | orchestrator | 2025-06-05 19:52:29.116165 | orchestrator | TASK 
[placement : include_tasks] *********************************************** 2025-06-05 19:52:29.116176 | orchestrator | Thursday 05 June 2025 19:51:07 +0000 (0:00:00.417) 0:00:01.275 ********* 2025-06-05 19:52:29.116187 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:52:29.116199 | orchestrator | 2025-06-05 19:52:29.116209 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-05 19:52:29.116220 | orchestrator | Thursday 05 June 2025 19:51:08 +0000 (0:00:00.539) 0:00:01.814 ********* 2025-06-05 19:52:29.116231 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-05 19:52:29.116242 | orchestrator | 2025-06-05 19:52:29.116253 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-05 19:52:29.116264 | orchestrator | Thursday 05 June 2025 19:51:11 +0000 (0:00:03.115) 0:00:04.930 ********* 2025-06-05 19:52:29.116274 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-05 19:52:29.116295 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-05 19:52:29.116306 | orchestrator | 2025-06-05 19:52:29.116317 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-05 19:52:29.116328 | orchestrator | Thursday 05 June 2025 19:51:18 +0000 (0:00:06.966) 0:00:11.897 ********* 2025-06-05 19:52:29.116338 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-05 19:52:29.116349 | orchestrator | 2025-06-05 19:52:29.116360 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-05 19:52:29.116370 | orchestrator | Thursday 05 June 2025 19:51:21 +0000 (0:00:03.571) 0:00:15.468 ********* 2025-06-05 19:52:29.116381 | orchestrator | 
[WARNING]: Module did not set no_log for update_password 2025-06-05 19:52:29.116392 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-05 19:52:29.116403 | orchestrator | 2025-06-05 19:52:29.116413 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-05 19:52:29.116424 | orchestrator | Thursday 05 June 2025 19:51:26 +0000 (0:00:04.186) 0:00:19.655 ********* 2025-06-05 19:52:29.116525 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-05 19:52:29.116541 | orchestrator | 2025-06-05 19:52:29.116552 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-05 19:52:29.116562 | orchestrator | Thursday 05 June 2025 19:51:29 +0000 (0:00:03.718) 0:00:23.373 ********* 2025-06-05 19:52:29.116573 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-05 19:52:29.116584 | orchestrator | 2025-06-05 19:52:29.116595 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-05 19:52:29.116607 | orchestrator | Thursday 05 June 2025 19:51:34 +0000 (0:00:04.449) 0:00:27.823 ********* 2025-06-05 19:52:29.116626 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:29.116644 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:29.116662 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:29.116680 | orchestrator | 2025-06-05 19:52:29.116698 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-05 19:52:29.116717 | orchestrator | Thursday 05 June 2025 19:51:34 +0000 (0:00:00.252) 0:00:28.076 ********* 2025-06-05 19:52:29.116739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.116789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.116818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.116830 | orchestrator | 2025-06-05 19:52:29.116842 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-05 19:52:29.116853 | orchestrator | Thursday 05 June 2025 19:51:36 +0000 (0:00:01.662) 0:00:29.739 ********* 2025-06-05 19:52:29.116863 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:29.116874 | orchestrator | 2025-06-05 19:52:29.116885 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-05 19:52:29.116896 | orchestrator | Thursday 05 June 2025 19:51:36 +0000 (0:00:00.313) 0:00:30.053 ********* 2025-06-05 19:52:29.116907 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:29.116917 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:29.116928 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:29.116939 | orchestrator | 2025-06-05 19:52:29.116950 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-05 19:52:29.116960 | orchestrator | Thursday 05 June 2025 19:51:37 +0000 (0:00:01.138) 0:00:31.191 ********* 2025-06-05 19:52:29.116971 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:52:29.116982 | orchestrator | 2025-06-05 19:52:29.116993 | orchestrator | TASK [service-cert-copy : placement | Copying 
over extra CA certificates] ****** 2025-06-05 19:52:29.117003 | orchestrator | Thursday 05 June 2025 19:51:38 +0000 (0:00:01.039) 0:00:32.231 ********* 2025-06-05 19:52:29.117015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117095 | orchestrator | 2025-06-05 19:52:29.117107 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-05 19:52:29.117118 | orchestrator | Thursday 05 June 2025 19:51:40 +0000 (0:00:02.198) 0:00:34.430 ********* 2025-06-05 19:52:29.117129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117141 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:29.117152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117165 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:29.117185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117205 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:29.117218 | orchestrator | 2025-06-05 19:52:29.117231 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-05 19:52:29.117243 | orchestrator | Thursday 05 June 2025 19:51:41 +0000 (0:00:00.676) 0:00:35.106 ********* 2025-06-05 19:52:29.117259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117270 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:29.117282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117293 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:29.117304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117316 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:29.117327 | orchestrator | 2025-06-05 19:52:29.117338 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-05 19:52:29.117349 | orchestrator | Thursday 05 June 2025 
19:51:43 +0000 (0:00:01.960) 0:00:37.067 ********* 2025-06-05 19:52:29.117368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117404 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117416 | orchestrator | 2025-06-05 19:52:29.117427 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-05 19:52:29.117438 | orchestrator | Thursday 05 June 2025 19:51:45 +0000 (0:00:02.262) 0:00:39.329 ********* 2025-06-05 19:52:29.117449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117510 | orchestrator | 2025-06-05 19:52:29.117521 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-05 19:52:29.117532 | orchestrator | Thursday 05 June 2025 19:51:49 +0000 (0:00:03.680) 0:00:43.010 ********* 2025-06-05 19:52:29.117543 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-05 19:52:29.117554 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-05 19:52:29.117565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-05 19:52:29.117576 | orchestrator | 2025-06-05 19:52:29.117588 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-05 19:52:29.117598 | orchestrator | Thursday 05 June 2025 19:51:51 +0000 (0:00:01.629) 0:00:44.640 ********* 2025-06-05 19:52:29.117609 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:29.117620 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:52:29.117631 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:52:29.117642 | orchestrator | 2025-06-05 19:52:29.117653 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-05 19:52:29.117664 | orchestrator | Thursday 05 June 2025 19:51:52 +0000 (0:00:01.792) 0:00:46.432 ********* 2025-06-05 19:52:29.117675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117687 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:29.117698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117715 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:29.117739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-05 19:52:29.117751 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:29.117762 | orchestrator | 2025-06-05 19:52:29.117773 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-05 19:52:29.117784 | orchestrator | Thursday 05 June 2025 19:51:53 +0000 (0:00:00.916) 0:00:47.348 ********* 2025-06-05 19:52:29.117795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:29.117836 | orchestrator | 2025-06-05 19:52:29.117847 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-05 19:52:29.117858 | orchestrator | 
Thursday 05 June 2025 19:51:55 +0000 (0:00:01.395) 0:00:48.744 ********* 2025-06-05 19:52:29.117869 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:29.117880 | orchestrator | 2025-06-05 19:52:29.117891 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-05 19:52:29.117902 | orchestrator | Thursday 05 June 2025 19:51:57 +0000 (0:00:02.264) 0:00:51.008 ********* 2025-06-05 19:52:29.117913 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:29.117923 | orchestrator | 2025-06-05 19:52:29.117934 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-05 19:52:29.117945 | orchestrator | Thursday 05 June 2025 19:51:59 +0000 (0:00:02.446) 0:00:53.455 ********* 2025-06-05 19:52:29.117961 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:29.117973 | orchestrator | 2025-06-05 19:52:29.117984 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-05 19:52:29.117995 | orchestrator | Thursday 05 June 2025 19:52:15 +0000 (0:00:15.387) 0:01:08.843 ********* 2025-06-05 19:52:29.118005 | orchestrator | 2025-06-05 19:52:29.118107 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-05 19:52:29.118124 | orchestrator | Thursday 05 June 2025 19:52:15 +0000 (0:00:00.264) 0:01:09.107 ********* 2025-06-05 19:52:29.118135 | orchestrator | 2025-06-05 19:52:29.118146 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-05 19:52:29.118162 | orchestrator | Thursday 05 June 2025 19:52:15 +0000 (0:00:00.210) 0:01:09.317 ********* 2025-06-05 19:52:29.118173 | orchestrator | 2025-06-05 19:52:29.118184 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-05 19:52:29.118195 | orchestrator | Thursday 05 June 2025 19:52:15 +0000 (0:00:00.144) 0:01:09.461 ********* 
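The per-host PLAY RECAP lines that close an Ansible run like this one are easy to consume mechanically, e.g. to fail a wrapper script when any host reports failures. A minimal Python sketch, assuming only Ansible's default recap line shape (`host : ok=N changed=N …`); the regex and helper name are illustrative, not part of this job:

```python
import re

# Assumed shape of an Ansible PLAY RECAP host line, e.g.:
# "testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0"
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counts>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split a recap line into (hostname, {counter_name: value})."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counts = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counts").split())
    }
    return m.group("host"), counts

# Usage: parse the testbed-node-0 recap line from this run.
host, counts = parse_recap_line(
    "testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0"
)
```

A wrapper would then treat `counts["failed"] > 0` or `counts["unreachable"] > 0` as a deploy failure.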
2025-06-05 19:52:29.118206 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:52:29.118217 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:52:29.118228 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:52:29.118239 | orchestrator |
2025-06-05 19:52:29.118250 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:52:29.118262 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:52:29.118273 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:52:29.118284 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:52:29.118295 | orchestrator |
2025-06-05 19:52:29.118306 | orchestrator |
2025-06-05 19:52:29.118317 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:52:29.118328 | orchestrator | Thursday 05 June 2025 19:52:27 +0000 (0:00:11.901) 0:01:21.363 *********
2025-06-05 19:52:29.118348 | orchestrator | ===============================================================================
2025-06-05 19:52:29.118359 | orchestrator | placement : Running placement bootstrap container ---------------------- 15.39s
2025-06-05 19:52:29.118370 | orchestrator | placement : Restart placement-api container ---------------------------- 11.90s
2025-06-05 19:52:29.118381 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.97s
2025-06-05 19:52:29.118391 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.45s
2025-06-05 19:52:29.118402 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.19s
2025-06-05 19:52:29.118413 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.72s
2025-06-05 19:52:29.118424 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.68s
2025-06-05 19:52:29.118435 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.57s
2025-06-05 19:52:29.118446 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.12s
2025-06-05 19:52:29.118457 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.45s
2025-06-05 19:52:29.118468 | orchestrator | placement : Creating placement databases -------------------------------- 2.26s
2025-06-05 19:52:29.118478 | orchestrator | placement : Copying over config.json files for services ----------------- 2.26s
2025-06-05 19:52:29.118489 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.20s
2025-06-05 19:52:29.118500 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.96s
2025-06-05 19:52:29.118511 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.79s
2025-06-05 19:52:29.118522 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.66s
2025-06-05 19:52:29.118533 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.63s
2025-06-05 19:52:29.118543 | orchestrator | placement : Check placement containers ---------------------------------- 1.40s
2025-06-05 19:52:29.118554 | orchestrator | placement : Set placement policy file ----------------------------------- 1.14s
2025-06-05 19:52:29.118565 | orchestrator | placement : include_tasks ----------------------------------------------- 1.04s
2025-06-05 19:52:29.118576 | orchestrator | 2025-06-05 19:52:29 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:52:29.118587 | orchestrator | 2025-06-05 19:52:29 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:52:32.142406 
| orchestrator | 2025-06-05 19:52:32 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:32.142500 | orchestrator | 2025-06-05 19:52:32 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:52:32.143115 | orchestrator | 2025-06-05 19:52:32 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:32.143697 | orchestrator | 2025-06-05 19:52:32 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:32.143719 | orchestrator | 2025-06-05 19:52:32 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:35.166603 | orchestrator | 2025-06-05 19:52:35 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:35.166814 | orchestrator | 2025-06-05 19:52:35 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:52:35.167412 | orchestrator | 2025-06-05 19:52:35 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:35.167922 | orchestrator | 2025-06-05 19:52:35 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:35.168173 | orchestrator | 2025-06-05 19:52:35 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:38.187976 | orchestrator | 2025-06-05 19:52:38 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:38.188962 | orchestrator | 2025-06-05 19:52:38 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state STARTED 2025-06-05 19:52:38.188995 | orchestrator | 2025-06-05 19:52:38 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:38.189722 | orchestrator | 2025-06-05 19:52:38 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:38.189747 | orchestrator | 2025-06-05 19:52:38 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:41.210267 | orchestrator | 2025-06-05 
19:52:41 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:41.210861 | orchestrator | 2025-06-05 19:52:41 | INFO  | Task 5cee23f8-4c93-4293-a93c-dcb39097a3bc is in state SUCCESS 2025-06-05 19:52:41.212405 | orchestrator | 2025-06-05 19:52:41.212491 | orchestrator | 2025-06-05 19:52:41.212505 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:52:41.212517 | orchestrator | 2025-06-05 19:52:41.212528 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:52:41.212540 | orchestrator | Thursday 05 June 2025 19:50:26 +0000 (0:00:00.337) 0:00:00.337 ********* 2025-06-05 19:52:41.212894 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:52:41.212910 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:52:41.212921 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:52:41.212932 | orchestrator | 2025-06-05 19:52:41.212943 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:52:41.212954 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:00.421) 0:00:00.758 ********* 2025-06-05 19:52:41.212965 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-05 19:52:41.212976 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-05 19:52:41.212987 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-05 19:52:41.212998 | orchestrator | 2025-06-05 19:52:41.213008 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-05 19:52:41.213019 | orchestrator | 2025-06-05 19:52:41.213030 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-05 19:52:41.213041 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:00.498) 0:00:01.257 ********* 2025-06-05 19:52:41.213052 | orchestrator | included: 
/ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:52:41.213121 | orchestrator | 2025-06-05 19:52:41.213132 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-05 19:52:41.213143 | orchestrator | Thursday 05 June 2025 19:50:28 +0000 (0:00:00.609) 0:00:01.867 ********* 2025-06-05 19:52:41.213155 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-05 19:52:41.213165 | orchestrator | 2025-06-05 19:52:41.213176 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-05 19:52:41.213187 | orchestrator | Thursday 05 June 2025 19:50:32 +0000 (0:00:03.913) 0:00:05.780 ********* 2025-06-05 19:52:41.213198 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-05 19:52:41.213208 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-05 19:52:41.213219 | orchestrator | 2025-06-05 19:52:41.213230 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-05 19:52:41.213241 | orchestrator | Thursday 05 June 2025 19:50:39 +0000 (0:00:07.037) 0:00:12.818 ********* 2025-06-05 19:52:41.213252 | orchestrator | FAILED - RETRYING: [testbed-node-0]: barbican | Creating projects (5 retries left). 
2025-06-05 19:52:41.213263 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-05 19:52:41.213274 | orchestrator |
2025-06-05 19:52:41.213285 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-05 19:52:41.213320 | orchestrator | Thursday 05 June 2025 19:50:56 +0000 (0:00:17.277) 0:00:30.095 *********
2025-06-05 19:52:41.213331 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-05 19:52:41.213342 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-05 19:52:41.213353 | orchestrator |
2025-06-05 19:52:41.213363 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-05 19:52:41.213374 | orchestrator | Thursday 05 June 2025 19:51:00 +0000 (0:00:04.339) 0:00:34.435 *********
2025-06-05 19:52:41.213385 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-05 19:52:41.213396 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-05 19:52:41.213407 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-05 19:52:41.213418 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-05 19:52:41.213429 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-05 19:52:41.213439 | orchestrator |
2025-06-05 19:52:41.213483 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-05 19:52:41.213497 | orchestrator | Thursday 05 June 2025 19:51:17 +0000 (0:00:16.218) 0:00:50.653 *********
2025-06-05 19:52:41.213509 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-05 19:52:41.213521 | orchestrator |
2025-06-05 19:52:41.213534 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-05 19:52:41.213546 | orchestrator | Thursday 05 June 2025 19:51:21 +0000 (0:00:04.748) 0:00:55.402 *********
2025-06-05 19:52:41.213574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.213604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.213619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.213641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.213655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.213673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.213695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.213710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.213723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.213742 | orchestrator |
2025-06-05 19:52:41.213756 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-06-05 19:52:41.213768 | orchestrator | Thursday 05 June 2025 19:51:24 +0000 (0:00:02.305) 0:00:57.708 *********
2025-06-05 19:52:41.213781 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-06-05 19:52:41.213794 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-06-05 19:52:41.213806 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-05 19:52:41.213818 | orchestrator |
2025-06-05 19:52:41.213832 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-05 19:52:41.213844 | orchestrator | Thursday 05 June 2025 19:51:25 +0000 (0:00:00.835) 0:00:58.544 *********
2025-06-05 19:52:41.213856 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:52:41.213867 | orchestrator |
2025-06-05 19:52:41.213878 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-05 19:52:41.213889 | orchestrator | Thursday 05 June 2025 19:51:25 +0000 (0:00:00.291) 0:00:58.835 *********
2025-06-05 19:52:41.213900 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:52:41.213911 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:52:41.213922 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:52:41.213932 | orchestrator |
2025-06-05 19:52:41.213943 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-05 19:52:41.213954 | orchestrator | Thursday 05 June 2025 19:51:26 +0000 (0:00:00.931) 0:00:59.766 *********
2025-06-05 19:52:41.213965 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:52:41.213976 | orchestrator |
2025-06-05 19:52:41.213987 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-05 19:52:41.213997 | orchestrator | Thursday 05 June 2025 19:51:27 +0000 (0:00:00.836) 0:01:00.603 *********
2025-06-05 19:52:41.214014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214208 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214219 | orchestrator |
2025-06-05 19:52:41.214231 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-06-05 19:52:41.214242 | orchestrator | Thursday 05 June 2025 19:51:30 +0000 (0:00:03.316) 0:01:03.920 *********
2025-06-05 19:52:41.214254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214294 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:52:41.214312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214354 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:52:41.214365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214423 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:52:41.214434 | orchestrator |
2025-06-05 19:52:41.214445 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-06-05 19:52:41.214456 | orchestrator | Thursday 05 June 2025 19:51:31 +0000 (0:00:00.900) 0:01:04.820 *********
2025-06-05 19:52:41.214467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-05 19:52:41.214529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-05 19:52:41.214554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.214566 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:41.214578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-05 19:52:41.214589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.214601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.214612 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:41.214623 | orchestrator | 2025-06-05 19:52:41.214634 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-05 19:52:41.214645 | orchestrator | Thursday 05 June 2025 19:51:32 +0000 (0:00:01.564) 0:01:06.384 ********* 2025-06-05 19:52:41.214661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.214688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.214701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.214712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.214724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.214739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.214764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.214776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.214787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.214798 | orchestrator | 2025-06-05 19:52:41.214809 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-05 19:52:41.214820 | orchestrator | Thursday 05 June 2025 19:51:36 +0000 (0:00:03.755) 0:01:10.139 ********* 2025-06-05 19:52:41.214831 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:52:41.214842 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.214853 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:52:41.214864 | orchestrator | 2025-06-05 19:52:41.214875 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-05 19:52:41.214885 | orchestrator | Thursday 05 June 2025 19:51:39 +0000 (0:00:02.881) 0:01:13.021 ********* 2025-06-05 19:52:41.214896 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:52:41.214907 | orchestrator | 2025-06-05 19:52:41.214918 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-05 19:52:41.214929 | orchestrator | Thursday 05 June 2025 19:51:40 +0000 (0:00:01.430) 0:01:14.452 ********* 2025-06-05 19:52:41.214939 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:41.214950 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:41.214961 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:41.214972 | orchestrator | 2025-06-05 19:52:41.214982 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-05 19:52:41.214993 | orchestrator | 
Thursday 05 June 2025 19:51:41 +0000 (0:00:00.807) 0:01:15.260 ********* 2025-06-05 19:52:41.215008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.215033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.215046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.215072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215159 | orchestrator | 2025-06-05 19:52:41.215170 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-05 19:52:41.215181 | orchestrator | Thursday 05 June 2025 19:51:53 +0000 (0:00:11.315) 0:01:26.576 ********* 2025-06-05 19:52:41.215193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-05 19:52:41.215205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-05 19:52:41.215227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2025-06-05 19:52:41.215245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.215257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.215268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-05 
19:52:41.215279 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:41.215290 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:41.215301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-05 19:52:41.215318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.215334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:52:41.215345 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:41.215356 | orchestrator | 2025-06-05 19:52:41.215367 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-05 19:52:41.215378 | orchestrator | Thursday 05 June 2025 19:51:53 +0000 (0:00:00.869) 0:01:27.445 ********* 2025-06-05 19:52:41.215396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.215409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.215420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-05 19:52:41.215445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:52:41.215530 | orchestrator | 2025-06-05 19:52:41.215542 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-05 19:52:41.215553 | orchestrator | Thursday 05 June 2025 
19:51:56 +0000 (0:00:02.721) 0:01:30.166 ********* 2025-06-05 19:52:41.215564 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:52:41.215575 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:52:41.215586 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:52:41.215596 | orchestrator | 2025-06-05 19:52:41.215607 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-05 19:52:41.215618 | orchestrator | Thursday 05 June 2025 19:51:56 +0000 (0:00:00.213) 0:01:30.380 ********* 2025-06-05 19:52:41.215629 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.215640 | orchestrator | 2025-06-05 19:52:41.215650 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-05 19:52:41.215661 | orchestrator | Thursday 05 June 2025 19:51:59 +0000 (0:00:02.163) 0:01:32.544 ********* 2025-06-05 19:52:41.215672 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.215682 | orchestrator | 2025-06-05 19:52:41.215693 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-05 19:52:41.215704 | orchestrator | Thursday 05 June 2025 19:52:01 +0000 (0:00:02.370) 0:01:34.915 ********* 2025-06-05 19:52:41.215715 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.215725 | orchestrator | 2025-06-05 19:52:41.215736 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-05 19:52:41.215751 | orchestrator | Thursday 05 June 2025 19:52:13 +0000 (0:00:12.444) 0:01:47.359 ********* 2025-06-05 19:52:41.215762 | orchestrator | 2025-06-05 19:52:41.215773 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-05 19:52:41.215784 | orchestrator | Thursday 05 June 2025 19:52:13 +0000 (0:00:00.067) 0:01:47.427 ********* 2025-06-05 19:52:41.215795 | orchestrator | 2025-06-05 19:52:41.215806 | orchestrator | TASK 
[barbican : Flush handlers] *********************************************** 2025-06-05 19:52:41.215816 | orchestrator | Thursday 05 June 2025 19:52:14 +0000 (0:00:00.067) 0:01:47.495 ********* 2025-06-05 19:52:41.215827 | orchestrator | 2025-06-05 19:52:41.215838 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-05 19:52:41.215848 | orchestrator | Thursday 05 June 2025 19:52:14 +0000 (0:00:00.067) 0:01:47.563 ********* 2025-06-05 19:52:41.215859 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.215870 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:52:41.215880 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:52:41.215891 | orchestrator | 2025-06-05 19:52:41.215902 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-05 19:52:41.215913 | orchestrator | Thursday 05 June 2025 19:52:22 +0000 (0:00:08.478) 0:01:56.042 ********* 2025-06-05 19:52:41.215929 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.215941 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:52:41.215952 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:52:41.215962 | orchestrator | 2025-06-05 19:52:41.215973 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-05 19:52:41.215984 | orchestrator | Thursday 05 June 2025 19:52:28 +0000 (0:00:06.287) 0:02:02.329 ********* 2025-06-05 19:52:41.215995 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:52:41.216005 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:52:41.216023 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:52:41.216034 | orchestrator | 2025-06-05 19:52:41.216045 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:52:41.216069 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-05 
19:52:41.216081 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-05 19:52:41.216093 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-05 19:52:41.216104 | orchestrator | 2025-06-05 19:52:41.216115 | orchestrator | 2025-06-05 19:52:41.216125 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:52:41.216136 | orchestrator | Thursday 05 June 2025 19:52:37 +0000 (0:00:08.880) 0:02:11.209 ********* 2025-06-05 19:52:41.216147 | orchestrator | =============================================================================== 2025-06-05 19:52:41.216158 | orchestrator | service-ks-register : barbican | Creating projects --------------------- 17.28s 2025-06-05 19:52:41.216169 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.22s 2025-06-05 19:52:41.216180 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.44s 2025-06-05 19:52:41.216190 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.32s 2025-06-05 19:52:41.216201 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.88s 2025-06-05 19:52:41.216212 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.48s 2025-06-05 19:52:41.216223 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.04s 2025-06-05 19:52:41.216234 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 6.29s 2025-06-05 19:52:41.216244 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.75s 2025-06-05 19:52:41.216255 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.34s 2025-06-05 19:52:41.216266 | orchestrator | 
service-ks-register : barbican | Creating services ---------------------- 3.91s 2025-06-05 19:52:41.216277 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.76s 2025-06-05 19:52:41.216288 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.32s 2025-06-05 19:52:41.216299 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.88s 2025-06-05 19:52:41.216309 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.72s 2025-06-05 19:52:41.216320 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.37s 2025-06-05 19:52:41.216331 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.31s 2025-06-05 19:52:41.216342 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.16s 2025-06-05 19:52:41.216353 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.56s 2025-06-05 19:52:41.216364 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.43s 2025-06-05 19:52:41.216374 | orchestrator | 2025-06-05 19:52:41 | INFO  | Task 21de9a88-5da1-43ea-886e-f391c48ca7df is in state STARTED 2025-06-05 19:52:41.216385 | orchestrator | 2025-06-05 19:52:41 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:41.216396 | orchestrator | 2025-06-05 19:52:41 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:41.216407 | orchestrator | 2025-06-05 19:52:41 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:44.240804 | orchestrator | 2025-06-05 19:52:44 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:44.241635 | orchestrator | 2025-06-05 19:52:44 | INFO  | Task 21de9a88-5da1-43ea-886e-f391c48ca7df is in state STARTED 2025-06-05 
19:52:44.242749 | orchestrator | 2025-06-05 19:52:44 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:44.243682 | orchestrator | 2025-06-05 19:52:44 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:44.243699 | orchestrator | 2025-06-05 19:52:44 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:47.283473 | orchestrator | 2025-06-05 19:52:47 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:47.283886 | orchestrator | 2025-06-05 19:52:47 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:52:47.285792 | orchestrator | 2025-06-05 19:52:47 | INFO  | Task 21de9a88-5da1-43ea-886e-f391c48ca7df is in state SUCCESS 2025-06-05 19:52:47.287111 | orchestrator | 2025-06-05 19:52:47 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:47.288625 | orchestrator | 2025-06-05 19:52:47 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:47.288661 | orchestrator | 2025-06-05 19:52:47 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:50.334752 | orchestrator | 2025-06-05 19:52:50 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:50.334985 | orchestrator | 2025-06-05 19:52:50 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:52:50.337416 | orchestrator | 2025-06-05 19:52:50 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:50.339586 | orchestrator | 2025-06-05 19:52:50 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:50.340258 | orchestrator | 2025-06-05 19:52:50 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:53.386581 | orchestrator | 2025-06-05 19:52:53 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:53.386663 | orchestrator 
| 2025-06-05 19:52:53 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:52:53.387693 | orchestrator | 2025-06-05 19:52:53 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:53.389065 | orchestrator | 2025-06-05 19:52:53 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:53.389150 | orchestrator | 2025-06-05 19:52:53 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:56.425875 | orchestrator | 2025-06-05 19:52:56 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:56.425958 | orchestrator | 2025-06-05 19:52:56 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:52:56.426622 | orchestrator | 2025-06-05 19:52:56 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:56.427776 | orchestrator | 2025-06-05 19:52:56 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:56.428644 | orchestrator | 2025-06-05 19:52:56 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:52:59.462429 | orchestrator | 2025-06-05 19:52:59 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:52:59.463539 | orchestrator | 2025-06-05 19:52:59 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:52:59.464244 | orchestrator | 2025-06-05 19:52:59 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:52:59.464879 | orchestrator | 2025-06-05 19:52:59 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:52:59.464929 | orchestrator | 2025-06-05 19:52:59 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:02.509518 | orchestrator | 2025-06-05 19:53:02 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:02.511357 | orchestrator | 2025-06-05 19:53:02 | INFO  | 
Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:02.512070 | orchestrator | 2025-06-05 19:53:02 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:02.514286 | orchestrator | 2025-06-05 19:53:02 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:02.514397 | orchestrator | 2025-06-05 19:53:02 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:05.559091 | orchestrator | 2025-06-05 19:53:05 | INFO  | Task ff68d6b0-5ea3-4fcc-bce6-af43ce0569f2 is in state STARTED 2025-06-05 19:53:05.559734 | orchestrator | 2025-06-05 19:53:05 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:05.561880 | orchestrator | 2025-06-05 19:53:05 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:05.563593 | orchestrator | 2025-06-05 19:53:05 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:05.565574 | orchestrator | 2025-06-05 19:53:05 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:05.565602 | orchestrator | 2025-06-05 19:53:05 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:08.596787 | orchestrator | 2025-06-05 19:53:08 | INFO  | Task ff68d6b0-5ea3-4fcc-bce6-af43ce0569f2 is in state STARTED 2025-06-05 19:53:08.598471 | orchestrator | 2025-06-05 19:53:08 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:08.600320 | orchestrator | 2025-06-05 19:53:08 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:08.601912 | orchestrator | 2025-06-05 19:53:08 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:08.603677 | orchestrator | 2025-06-05 19:53:08 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:08.604007 | orchestrator | 2025-06-05 19:53:08 | INFO  | Wait 1 
second(s) until the next check 2025-06-05 19:53:11.640406 | orchestrator | 2025-06-05 19:53:11 | INFO  | Task ff68d6b0-5ea3-4fcc-bce6-af43ce0569f2 is in state STARTED 2025-06-05 19:53:11.641511 | orchestrator | 2025-06-05 19:53:11 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:11.643029 | orchestrator | 2025-06-05 19:53:11 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:11.645378 | orchestrator | 2025-06-05 19:53:11 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:11.646968 | orchestrator | 2025-06-05 19:53:11 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:11.646992 | orchestrator | 2025-06-05 19:53:11 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:14.687469 | orchestrator | 2025-06-05 19:53:14 | INFO  | Task ff68d6b0-5ea3-4fcc-bce6-af43ce0569f2 is in state STARTED 2025-06-05 19:53:14.687558 | orchestrator | 2025-06-05 19:53:14 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:14.687574 | orchestrator | 2025-06-05 19:53:14 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:14.687906 | orchestrator | 2025-06-05 19:53:14 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:14.690807 | orchestrator | 2025-06-05 19:53:14 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:14.690854 | orchestrator | 2025-06-05 19:53:14 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:17.722603 | orchestrator | 2025-06-05 19:53:17 | INFO  | Task ff68d6b0-5ea3-4fcc-bce6-af43ce0569f2 is in state STARTED 2025-06-05 19:53:17.722692 | orchestrator | 2025-06-05 19:53:17 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:17.722707 | orchestrator | 2025-06-05 19:53:17 | INFO  | Task 
566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:17.723082 | orchestrator | 2025-06-05 19:53:17 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:17.723897 | orchestrator | 2025-06-05 19:53:17 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:17.723922 | orchestrator | 2025-06-05 19:53:17 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:20.765804 | orchestrator | 2025-06-05 19:53:20 | INFO  | Task ff68d6b0-5ea3-4fcc-bce6-af43ce0569f2 is in state SUCCESS 2025-06-05 19:53:20.765886 | orchestrator | 2025-06-05 19:53:20 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:20.765901 | orchestrator | 2025-06-05 19:53:20 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:20.768237 | orchestrator | 2025-06-05 19:53:20 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:20.768285 | orchestrator | 2025-06-05 19:53:20 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:20.768298 | orchestrator | 2025-06-05 19:53:20 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:23.794688 | orchestrator | 2025-06-05 19:53:23 | INFO  | Task 99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state STARTED 2025-06-05 19:53:23.794900 | orchestrator | 2025-06-05 19:53:23 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:53:23.794933 | orchestrator | 2025-06-05 19:53:23 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED 2025-06-05 19:53:23.795660 | orchestrator | 2025-06-05 19:53:23 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED 2025-06-05 19:53:23.795692 | orchestrator | 2025-06-05 19:53:23 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:53:26.831402 | orchestrator | 2025-06-05 19:53:26 | INFO  | Task 
99ef3b17-828b-4fa4-887a-8f58929ce4a4 is in state SUCCESS 2025-06-05 19:53:26.832291 | orchestrator | 2025-06-05 19:53:26.832905 | orchestrator | 2025-06-05 19:53:26.832922 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:53:26.832933 | orchestrator | 2025-06-05 19:53:26.832945 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:53:26.832956 | orchestrator | Thursday 05 June 2025 19:52:43 +0000 (0:00:00.132) 0:00:00.132 ********* 2025-06-05 19:53:26.833531 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:53:26.833904 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:53:26.833916 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:53:26.833927 | orchestrator | 2025-06-05 19:53:26.833939 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:53:26.833950 | orchestrator | Thursday 05 June 2025 19:52:44 +0000 (0:00:00.223) 0:00:00.356 ********* 2025-06-05 19:53:26.833961 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-05 19:53:26.833973 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-05 19:53:26.833984 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-05 19:53:26.833994 | orchestrator | 2025-06-05 19:53:26.834005 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-05 19:53:26.834074 | orchestrator | 2025-06-05 19:53:26.834089 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-05 19:53:26.834101 | orchestrator | Thursday 05 June 2025 19:52:44 +0000 (0:00:00.486) 0:00:00.842 ********* 2025-06-05 19:53:26.834112 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:53:26.834123 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:53:26.834134 | orchestrator | ok: [testbed-node-0] 2025-06-05 
19:53:26.834144 | orchestrator | 2025-06-05 19:53:26.834190 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:53:26.834202 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:53:26.834214 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:53:26.834225 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:53:26.834236 | orchestrator | 2025-06-05 19:53:26.834247 | orchestrator | 2025-06-05 19:53:26.834258 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:53:26.834269 | orchestrator | Thursday 05 June 2025 19:52:45 +0000 (0:00:00.781) 0:00:01.624 ********* 2025-06-05 19:53:26.834280 | orchestrator | =============================================================================== 2025-06-05 19:53:26.834291 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s 2025-06-05 19:53:26.834302 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-06-05 19:53:26.834313 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.22s 2025-06-05 19:53:26.834323 | orchestrator | 2025-06-05 19:53:26.834334 | orchestrator | None 2025-06-05 19:53:26.834345 | orchestrator | 2025-06-05 19:53:26.834356 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:53:26.834367 | orchestrator | 2025-06-05 19:53:26.834378 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:53:26.834389 | orchestrator | Thursday 05 June 2025 19:50:26 +0000 (0:00:00.318) 0:00:00.318 ********* 2025-06-05 19:53:26.834400 | orchestrator | ok: [testbed-node-0] 2025-06-05 
19:53:26.834410 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:53:26.834421 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:53:26.834432 | orchestrator | 2025-06-05 19:53:26.834443 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:53:26.834454 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:00.401) 0:00:00.720 ********* 2025-06-05 19:53:26.834465 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-05 19:53:26.834476 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-05 19:53:26.834488 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-05 19:53:26.834498 | orchestrator | 2025-06-05 19:53:26.834509 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-05 19:53:26.834521 | orchestrator | 2025-06-05 19:53:26.834533 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-05 19:53:26.834546 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:00.539) 0:00:01.259 ********* 2025-06-05 19:53:26.834559 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:53:26.834571 | orchestrator | 2025-06-05 19:53:26.834583 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-05 19:53:26.834596 | orchestrator | Thursday 05 June 2025 19:50:28 +0000 (0:00:00.566) 0:00:01.826 ********* 2025-06-05 19:53:26.834619 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-05 19:53:26.834632 | orchestrator | 2025-06-05 19:53:26.834644 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-05 19:53:26.834657 | orchestrator | Thursday 05 June 2025 19:50:32 +0000 (0:00:03.906) 0:00:05.733 ********* 2025-06-05 19:53:26.834676 | 
orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-05 19:53:26.834689 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-05 19:53:26.834701 | orchestrator | 2025-06-05 19:53:26.834714 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-05 19:53:26.834725 | orchestrator | Thursday 05 June 2025 19:50:39 +0000 (0:00:06.941) 0:00:12.675 ********* 2025-06-05 19:53:26.834736 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-05 19:53:26.834747 | orchestrator | 2025-06-05 19:53:26.834758 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-05 19:53:26.834769 | orchestrator | Thursday 05 June 2025 19:50:43 +0000 (0:00:03.757) 0:00:16.432 ********* 2025-06-05 19:53:26.834824 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-05 19:53:26.834837 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-05 19:53:26.834848 | orchestrator | 2025-06-05 19:53:26.834859 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-05 19:53:26.834870 | orchestrator | Thursday 05 June 2025 19:50:47 +0000 (0:00:04.645) 0:00:21.078 ********* 2025-06-05 19:53:26.834881 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-05 19:53:26.834892 | orchestrator | 2025-06-05 19:53:26.834903 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-05 19:53:26.834914 | orchestrator | Thursday 05 June 2025 19:50:51 +0000 (0:00:03.689) 0:00:24.767 ********* 2025-06-05 19:53:26.834925 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-05 19:53:26.834936 | orchestrator | 2025-06-05 19:53:26.834947 | orchestrator | TASK [designate : Ensuring config 
directories exist] *************************** 2025-06-05 19:53:26.834957 | orchestrator | Thursday 05 June 2025 19:50:55 +0000 (0:00:04.393) 0:00:29.160 ********* 2025-06-05 19:53:26.834971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.834987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2025-06-05 19:53:26.835005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.835023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.835062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.835075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.835087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835413 | orchestrator |
2025-06-05 19:53:26.835425 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-06-05 19:53:26.835436 | orchestrator | Thursday 05 June 2025 19:50:58 +0000 (0:00:03.052) 0:00:32.213 *********
2025-06-05 19:53:26.835447 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:53:26.835459 | orchestrator |
2025-06-05 19:53:26.835470 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-06-05 19:53:26.835481 | orchestrator | Thursday 05 June 2025 19:50:58 +0000 (0:00:00.124) 0:00:32.338 *********
2025-06-05 19:53:26.835491 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:53:26.835502 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:53:26.835513 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:53:26.835524 | orchestrator |
2025-06-05 19:53:26.835535 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-05 19:53:26.835545 | orchestrator | Thursday 05 June 2025 19:50:59 +0000 (0:00:00.277) 0:00:32.615 *********
2025-06-05 19:53:26.835556 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:53:26.835567 | orchestrator |
2025-06-05 19:53:26.835578 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-06-05 19:53:26.835589 | orchestrator | Thursday 05 June 2025 19:50:59 +0000 (0:00:00.670) 0:00:33.286 *********
2025-06-05 19:53:26.835600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.835619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.835636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.835679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.835693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.835705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.835723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.835961 | orchestrator |
2025-06-05 19:53:26.835973 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-06-05 19:53:26.835985 | orchestrator | Thursday 05 June 2025 19:51:06 +0000 (0:00:06.393) 0:00:39.679 *********
2025-06-05 19:53:26.835999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.836019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.836037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836125 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:53:26.836137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.836177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.836189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836284 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:53:26.836295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.836307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-05 19:53:26.836318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-05 19:53:26.836409 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:53:26.836420 | orchestrator |
2025-06-05 19:53:26.836431 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-06-05 19:53:26.836442 | orchestrator | Thursday 05 June 2025 19:51:07 +0000 (0:00:01.302) 0:00:40.982 *********
2025-06-05 19:53:26.836454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-05 19:53:26.836466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:53:26.836477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836565 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:53:26.836577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.836589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:53:26.836600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836689 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:53:26.836701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 
19:53:26.836712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:53:26.836724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836782 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.836815 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:53:26.836826 | orchestrator | 2025-06-05 19:53:26.836837 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-05 19:53:26.836848 | orchestrator | Thursday 05 June 2025 19:51:08 +0000 (0:00:01.142) 0:00:42.124 ********* 2025-06-05 19:53:26.836860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.836871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.836883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.836926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.836946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.836957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837339 | orchestrator | 2025-06-05 19:53:26.837351 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-05 19:53:26.837362 | orchestrator | Thursday 05 June 2025 19:51:15 +0000 (0:00:06.592) 0:00:48.716 ********* 2025-06-05 19:53:26.837374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.837386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.837402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.837426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-06-05 19:53:26.837438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-06-05 19:53:26.837527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837562 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837638 | orchestrator | 2025-06-05 19:53:26.837649 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-05 19:53:26.837661 | orchestrator | Thursday 05 June 2025 19:51:34 +0000 (0:00:19.089) 0:01:07.805 ********* 2025-06-05 19:53:26.837672 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-05 19:53:26.837683 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-05 19:53:26.837694 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-05 19:53:26.837705 | orchestrator | 2025-06-05 19:53:26.837716 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-05 
19:53:26.837726 | orchestrator | Thursday 05 June 2025 19:51:41 +0000 (0:00:07.053) 0:01:14.859 ********* 2025-06-05 19:53:26.837737 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-05 19:53:26.837748 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-05 19:53:26.837759 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-05 19:53:26.837770 | orchestrator | 2025-06-05 19:53:26.837781 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-05 19:53:26.837791 | orchestrator | Thursday 05 June 2025 19:51:46 +0000 (0:00:04.827) 0:01:19.686 ********* 2025-06-05 19:53:26.837802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.837824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.837842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.837854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.837980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.837997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838115 | orchestrator | 2025-06-05 19:53:26.838126 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-05 19:53:26.838137 | orchestrator | Thursday 05 June 2025 19:51:50 +0000 (0:00:04.015) 0:01:23.702 ********* 2025-06-05 19:53:26.838209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.838233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.838251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.838324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838479 | orchestrator | 2025-06-05 19:53:26.838490 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-05 19:53:26.838507 | orchestrator | Thursday 05 June 2025 19:51:54 +0000 (0:00:03.840) 0:01:27.542 ********* 2025-06-05 19:53:26.838518 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:53:26.838530 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:53:26.838541 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:53:26.838551 | orchestrator | 2025-06-05 19:53:26.838562 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-05 19:53:26.838573 | orchestrator | Thursday 05 June 2025 19:51:54 +0000 (0:00:00.493) 0:01:28.036 ********* 2025-06-05 19:53:26.838584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.838596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:53:26.838613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838669 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:53:26.838679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.838690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:53:26.838704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838756 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:53:26.838766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-05 19:53:26.838776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-05 19:53:26.838790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-05 
19:53:26.838805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-05 19:53:26.838844 | orchestrator | skipping: [testbed-node-2] 2025-06-05 
19:53:26.838854 | orchestrator | 2025-06-05 19:53:26.838864 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-05 19:53:26.838874 | orchestrator | Thursday 05 June 2025 19:51:55 +0000 (0:00:00.812) 0:01:28.848 ********* 2025-06-05 19:53:26.838884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.838895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.838914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-05 19:53:26.838924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.838999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-05 19:53:26.839111 | orchestrator | 2025-06-05 19:53:26.839121 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-05 19:53:26.839131 | orchestrator | Thursday 05 June 2025 19:52:00 +0000 (0:00:04.793) 0:01:33.642 ********* 2025-06-05 19:53:26.839141 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:53:26.839166 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:53:26.839176 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:53:26.839186 | orchestrator | 2025-06-05 19:53:26.839196 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-05 19:53:26.839205 | orchestrator | Thursday 05 June 2025 19:52:00 +0000 (0:00:00.291) 0:01:33.934 ********* 2025-06-05 19:53:26.839215 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-05 19:53:26.839225 | orchestrator | 2025-06-05 19:53:26.839235 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-05 19:53:26.839245 | 
orchestrator | Thursday 05 June 2025 19:52:03 +0000 (0:00:02.584) 0:01:36.518 ********* 2025-06-05 19:53:26.839254 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-05 19:53:26.839264 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-05 19:53:26.839274 | orchestrator | 2025-06-05 19:53:26.839283 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-05 19:53:26.839293 | orchestrator | Thursday 05 June 2025 19:52:05 +0000 (0:00:02.463) 0:01:38.982 ********* 2025-06-05 19:53:26.839302 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839312 | orchestrator | 2025-06-05 19:53:26.839322 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-05 19:53:26.839331 | orchestrator | Thursday 05 June 2025 19:52:20 +0000 (0:00:14.839) 0:01:53.822 ********* 2025-06-05 19:53:26.839341 | orchestrator | 2025-06-05 19:53:26.839351 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-05 19:53:26.839360 | orchestrator | Thursday 05 June 2025 19:52:20 +0000 (0:00:00.060) 0:01:53.882 ********* 2025-06-05 19:53:26.839370 | orchestrator | 2025-06-05 19:53:26.839379 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-05 19:53:26.839389 | orchestrator | Thursday 05 June 2025 19:52:20 +0000 (0:00:00.058) 0:01:53.941 ********* 2025-06-05 19:53:26.839398 | orchestrator | 2025-06-05 19:53:26.839408 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-05 19:53:26.839418 | orchestrator | Thursday 05 June 2025 19:52:20 +0000 (0:00:00.061) 0:01:54.003 ********* 2025-06-05 19:53:26.839427 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839437 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:53:26.839452 | orchestrator | changed: [testbed-node-2] 
2025-06-05 19:53:26.839461 | orchestrator | 2025-06-05 19:53:26.839471 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-05 19:53:26.839481 | orchestrator | Thursday 05 June 2025 19:52:30 +0000 (0:00:09.867) 0:02:03.871 ********* 2025-06-05 19:53:26.839490 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839500 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:53:26.839510 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:53:26.839519 | orchestrator | 2025-06-05 19:53:26.839529 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-05 19:53:26.839542 | orchestrator | Thursday 05 June 2025 19:52:37 +0000 (0:00:07.105) 0:02:10.979 ********* 2025-06-05 19:53:26.839552 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839562 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:53:26.839571 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:53:26.839581 | orchestrator | 2025-06-05 19:53:26.839590 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-05 19:53:26.839600 | orchestrator | Thursday 05 June 2025 19:52:44 +0000 (0:00:06.819) 0:02:17.798 ********* 2025-06-05 19:53:26.839610 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:53:26.839619 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:53:26.839629 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839639 | orchestrator | 2025-06-05 19:53:26.839648 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-05 19:53:26.839658 | orchestrator | Thursday 05 June 2025 19:52:53 +0000 (0:00:09.117) 0:02:26.916 ********* 2025-06-05 19:53:26.839668 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839677 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:53:26.839687 | orchestrator | changed: [testbed-node-1] 2025-06-05 
19:53:26.839697 | orchestrator | 2025-06-05 19:53:26.839706 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2025-06-05 19:53:26.839721 | orchestrator | Thursday 05 June 2025 19:53:03 +0000 (0:00:09.974) 0:02:36.890 ********* 2025-06-05 19:53:26.839731 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:53:26.839741 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839750 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:53:26.839760 | orchestrator | 2025-06-05 19:53:26.839770 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-05 19:53:26.839779 | orchestrator | Thursday 05 June 2025 19:53:17 +0000 (0:00:13.605) 0:02:50.495 ********* 2025-06-05 19:53:26.839789 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:53:26.839799 | orchestrator | 2025-06-05 19:53:26.839808 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:53:26.839818 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-05 19:53:26.839829 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-05 19:53:26.839839 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-05 19:53:26.839848 | orchestrator | 2025-06-05 19:53:26.839858 | orchestrator | 2025-06-05 19:53:26.839868 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:53:26.839877 | orchestrator | Thursday 05 June 2025 19:53:25 +0000 (0:00:08.153) 0:02:58.649 ********* 2025-06-05 19:53:26.839887 | orchestrator | =============================================================================== 2025-06-05 19:53:26.839897 | orchestrator | designate : Copying over designate.conf -------------------------------- 19.09s 
2025-06-05 19:53:26.839906 | orchestrator | designate : Running Designate bootstrap container ---------------------- 14.84s
2025-06-05 19:53:26.839916 | orchestrator | designate : Restart designate-worker container ------------------------- 13.61s
2025-06-05 19:53:26.839931 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.97s
2025-06-05 19:53:26.839941 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.87s
2025-06-05 19:53:26.839950 | orchestrator | designate : Restart designate-producer container ------------------------ 9.12s
2025-06-05 19:53:26.839960 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.15s
2025-06-05 19:53:26.839970 | orchestrator | designate : Restart designate-api container ----------------------------- 7.11s
2025-06-05 19:53:26.839979 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.05s
2025-06-05 19:53:26.839989 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.94s
2025-06-05 19:53:26.839998 | orchestrator | designate : Restart designate-central container ------------------------- 6.82s
2025-06-05 19:53:26.840008 | orchestrator | designate : Copying over config.json files for services ----------------- 6.59s
2025-06-05 19:53:26.840018 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.39s
2025-06-05 19:53:26.840027 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.83s
2025-06-05 19:53:26.840037 | orchestrator | designate : Check designate containers ---------------------------------- 4.79s
2025-06-05 19:53:26.840046 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.65s
2025-06-05 19:53:26.840056 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.39s
2025-06-05 19:53:26.840066 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.02s
2025-06-05 19:53:26.840075 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.91s
2025-06-05 19:53:26.840085 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.84s
2025-06-05 19:53:26.840095 | orchestrator | 2025-06-05 19:53:26 | INFO  | Task 73b76bb9-6ecb-4ef0-8ea3-cc4a9741870c is in state STARTED
2025-06-05 19:53:26.840104 | orchestrator | 2025-06-05 19:53:26 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:53:26.840114 | orchestrator | 2025-06-05 19:53:26 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED
2025-06-05 19:53:26.840124 | orchestrator | 2025-06-05 19:53:26 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:53:26.840138 | orchestrator | 2025-06-05 19:53:26 | INFO  | Wait 1 second(s) until the next check
[... identical polling output for the same four tasks repeated every ~3 seconds, 19:53:29 through 19:53:57, all still in state STARTED ...]
2025-06-05 19:54:00.321387 | orchestrator | 2025-06-05 19:54:00 | INFO  | Task 73b76bb9-6ecb-4ef0-8ea3-cc4a9741870c is in state SUCCESS
2025-06-05 19:54:00.327379 | orchestrator | 2025-06-05 19:54:00 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:00.329538 | orchestrator | 2025-06-05 19:54:00 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:00.331078 | orchestrator | 2025-06-05 19:54:00 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state STARTED
2025-06-05 19:54:00.332674 | orchestrator | 2025-06-05 19:54:00 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:00.332981 | orchestrator | 2025-06-05 19:54:00 | INFO  | Wait 1 second(s) until the next check
[... identical polling output for tasks 64464e4c, 566c7740, 159a67f8 and 09e0cf60 repeated every ~3 seconds, 19:54:03 through 19:54:27, all still in state STARTED ...]
2025-06-05 19:54:30.763180 | orchestrator | 2025-06-05 19:54:30 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:30.763805 | orchestrator | 2025-06-05 19:54:30 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:30.765723 | orchestrator | 2025-06-05 19:54:30 | INFO  | Task 159a67f8-7b8e-46f2-9eca-2b1b3b999ca0 is in state SUCCESS
2025-06-05 19:54:30.767480 | orchestrator |
2025-06-05 19:54:30.767530 | orchestrator |
2025-06-05 19:54:30.767543 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:54:30.767556 | orchestrator |
2025-06-05 19:54:30.767568 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:54:30.767891 | orchestrator | Thursday 05 June 2025 19:53:29 +0000 (0:00:00.244) 0:00:00.244 *********
2025-06-05 19:54:30.767913 | orchestrator | ok: [testbed-manager]
2025-06-05 19:54:30.767926 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:54:30.767937 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:54:30.767948 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:54:30.767959 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:54:30.767970 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:54:30.767981 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:54:30.767994 | orchestrator |
2025-06-05 19:54:30.768006 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:54:30.768018 | orchestrator | Thursday 05 June 2025 19:53:30 +0000 (0:00:00.653) 0:00:00.897 *********
2025-06-05 19:54:30.768030 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768042 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768053 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768064 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768076 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768087 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768098 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-06-05 19:54:30.768110 | orchestrator |
2025-06-05 19:54:30.768121 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-06-05 19:54:30.768133 | orchestrator |
2025-06-05 19:54:30.768144 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-06-05 19:54:30.768156 | orchestrator | Thursday 05 June 2025 19:53:31 +0000 (0:00:00.497) 0:00:01.395 *********
2025-06-05 19:54:30.768168 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:54:30.768181 | orchestrator |
2025-06-05 19:54:30.768193 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-06-05 19:54:30.768204 | orchestrator | Thursday 05 June 2025 19:53:32 +0000 (0:00:01.156) 0:00:02.552 *********
2025-06-05 19:54:30.768215 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2025-06-05 19:54:30.768227 | orchestrator |
2025-06-05 19:54:30.768238 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-06-05 19:54:30.768250 | orchestrator | Thursday 05 June 2025 19:53:35 +0000 (0:00:02.998) 0:00:05.550 *********
2025-06-05 19:54:30.768306 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-06-05 19:54:30.768320 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-06-05 19:54:30.768330 | orchestrator |
2025-06-05 19:54:30.768341 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-06-05 19:54:30.768352 | orchestrator | Thursday 05 June 2025 19:53:40 +0000 (0:00:05.436) 0:00:10.986 *********
2025-06-05 19:54:30.768363 | orchestrator | ok: [testbed-manager] => (item=service)
2025-06-05 19:54:30.768374 | orchestrator |
2025-06-05 19:54:30.768385 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-06-05 19:54:30.768395 | orchestrator | Thursday 05 June 2025 19:53:43 +0000 (0:00:02.774) 0:00:13.761 *********
2025-06-05 19:54:30.768406 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-05 19:54:30.768417 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2025-06-05 19:54:30.768428 | orchestrator |
2025-06-05 19:54:30.768439 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-06-05 19:54:30.768449 | orchestrator | Thursday 05 June 2025 19:53:46 +0000 (0:00:03.211) 0:00:16.973 *********
2025-06-05 19:54:30.768460 | orchestrator | ok: [testbed-manager] => (item=admin)
2025-06-05 19:54:30.768471 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2025-06-05 19:54:30.768482 | orchestrator |
2025-06-05 19:54:30.768492 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-06-05 19:54:30.768503 | orchestrator | Thursday 05 June 2025 19:53:51 +0000 (0:00:05.386) 0:00:22.360 *********
2025-06-05 19:54:30.768514 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2025-06-05 19:54:30.768525 | orchestrator |
2025-06-05 19:54:30.768535 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:54:30.768546 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768558 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768569 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768580 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768591 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768614 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768625 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:54:30.768637 | orchestrator |
2025-06-05 19:54:30.768648 | orchestrator |
2025-06-05 19:54:30.768666 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:54:30.768678 | orchestrator | Thursday 05 June 2025 19:53:56 +0000 (0:00:04.984) 0:00:27.344 *********
2025-06-05 19:54:30.768689 | orchestrator | ===============================================================================
2025-06-05 19:54:30.768700 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.44s
2025-06-05 19:54:30.768711 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 5.39s
2025-06-05 19:54:30.768722 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.98s
2025-06-05 19:54:30.768733 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.21s
2025-06-05 19:54:30.768744 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.00s
2025-06-05 19:54:30.768763 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.77s
2025-06-05 19:54:30.768774 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.16s
2025-06-05 19:54:30.768785 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.65s
2025-06-05 19:54:30.768796 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.50s
2025-06-05 19:54:30.768807 | orchestrator |
2025-06-05 19:54:30.768818 | orchestrator |
2025-06-05 19:54:30.768829 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:54:30.768840 | orchestrator |
2025-06-05 19:54:30.768851 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:54:30.768862 | orchestrator | Thursday 05 June 2025 19:52:33 +0000 (0:00:00.257) 0:00:00.257 *********
2025-06-05 19:54:30.768873 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:54:30.768884 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:54:30.768895 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:54:30.768906 | orchestrator |
2025-06-05 19:54:30.768917 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:54:30.768928 | orchestrator | Thursday 05 June 2025 19:52:33 +0000 (0:00:00.219) 0:00:00.476 *********
2025-06-05 19:54:30.768939 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-06-05 19:54:30.768951 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
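The recurring "Task … is in state STARTED" / "Wait 1 second(s) until the next check" messages throughout this log come from the manager polling its background tasks until each reports a terminal state. A rough, self-contained sketch of that pattern (the `get_state` callback and task IDs here are stand-ins, not the real OSISM client API):

```python
import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}

def wait_for_tasks(task_ids, get_state, interval=1.0, log=print):
    """Poll get_state(task_id) until every task reaches a terminal state.

    Mirrors the log pattern above: report each task's state every cycle,
    then wait before the next check.
    """
    pending = list(task_ids)
    results = {}
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                results[task_id] = state
            else:
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

As in the log, tasks that finish (for example 73b76bb9… going to SUCCESS at 19:54:00) drop out of the polling set while the rest keep being reported each cycle.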
2025-06-05 19:54:30.768962 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-06-05 19:54:30.768973 | orchestrator |
2025-06-05 19:54:30.768984 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-06-05 19:54:30.768995 | orchestrator |
2025-06-05 19:54:30.769006 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-05 19:54:30.769017 | orchestrator | Thursday 05 June 2025 19:52:33 +0000 (0:00:00.338) 0:00:00.815 *********
2025-06-05 19:54:30.769028 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:54:30.769039 | orchestrator |
2025-06-05 19:54:30.769050 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-06-05 19:54:30.769061 | orchestrator | Thursday 05 June 2025 19:52:34 +0000 (0:00:00.424) 0:00:01.239 *********
2025-06-05 19:54:30.769072 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-06-05 19:54:30.769083 | orchestrator |
2025-06-05 19:54:30.769094 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-06-05 19:54:30.769105 | orchestrator | Thursday 05 June 2025 19:52:38 +0000 (0:00:04.088) 0:00:05.327 *********
2025-06-05 19:54:30.769116 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-06-05 19:54:30.769127 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-06-05 19:54:30.769138 | orchestrator |
2025-06-05 19:54:30.769149 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-06-05 19:54:30.769160 | orchestrator | Thursday 05 June 2025 19:52:45 +0000 (0:00:07.122) 0:00:12.450 *********
2025-06-05 19:54:30.769171 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-05 19:54:30.769182 | orchestrator |
2025-06-05 19:54:30.769193 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-06-05 19:54:30.769204 | orchestrator | Thursday 05 June 2025 19:52:48 +0000 (0:00:03.491) 0:00:15.942 *********
2025-06-05 19:54:30.769215 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-05 19:54:30.769226 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-06-05 19:54:30.769237 | orchestrator |
2025-06-05 19:54:30.769248 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-06-05 19:54:30.769276 | orchestrator | Thursday 05 June 2025 19:52:53 +0000 (0:00:04.175) 0:00:20.118 *********
2025-06-05 19:54:30.769287 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-05 19:54:30.769305 | orchestrator |
2025-06-05 19:54:30.769316 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-06-05 19:54:30.769327 | orchestrator | Thursday 05 June 2025 19:52:56 +0000 (0:00:03.599) 0:00:23.717 *********
2025-06-05 19:54:30.769337 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-06-05 19:54:30.769348 | orchestrator |
2025-06-05 19:54:30.769358 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-06-05 19:54:30.769369 | orchestrator | Thursday 05 June 2025 19:53:01 +0000 (0:00:04.464) 0:00:28.182 *********
2025-06-05 19:54:30.769380 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.769390 | orchestrator |
2025-06-05 19:54:30.769401 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-06-05 19:54:30.769420 | orchestrator | Thursday 05 June 2025 19:53:05 +0000 (0:00:03.777) 0:00:31.960 *********
2025-06-05 19:54:30.769432 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.769443 | orchestrator |
2025-06-05 19:54:30.769454 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2025-06-05 19:54:30.769464 | orchestrator | Thursday 05 June 2025 19:53:09 +0000 (0:00:04.389) 0:00:36.349 *********
2025-06-05 19:54:30.769480 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.769492 | orchestrator |
2025-06-05 19:54:30.769503 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2025-06-05 19:54:30.769513 | orchestrator | Thursday 05 June 2025 19:53:13 +0000 (0:00:04.068) 0:00:40.418 *********
2025-06-05 19:54:30.769528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.769545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.769557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.769575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.769600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.769613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.769624 | orchestrator |
2025-06-05 19:54:30.769635 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-06-05 19:54:30.769646 | orchestrator | Thursday 05 June 2025 19:53:15 +0000 (0:00:01.565) 0:00:41.984 *********
2025-06-05 19:54:30.769657 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:54:30.769668 | orchestrator |
2025-06-05 19:54:30.769679 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-06-05 19:54:30.769690 | orchestrator | Thursday 05 June 2025 19:53:15 +0000 (0:00:00.363) 0:00:42.347 *********
2025-06-05 19:54:30.769701 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:54:30.769711 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:54:30.769722 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:54:30.769733 | orchestrator |
2025-06-05 19:54:30.769744 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-06-05 19:54:30.769754 | orchestrator | Thursday 05 June 2025 19:53:16 +0000 (0:00:01.367) 0:00:43.715 *********
2025-06-05 19:54:30.769765 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:54:30.769776 | orchestrator |
2025-06-05 19:54:30.769787 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-06-05 19:54:30.769797 | orchestrator | Thursday 05 June 2025 19:53:19 +0000 (0:00:02.428) 0:00:46.144 *********
2025-06-05 19:54:30.769809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.769827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.769850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api':
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.769863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.769875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.769892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.769904 | orchestrator | 2025-06-05 19:54:30.769915 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-05 19:54:30.769926 | orchestrator | Thursday 05 June 2025 19:53:22 +0000 (0:00:03.411) 0:00:49.555 ********* 2025-06-05 19:54:30.769937 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:54:30.769947 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:54:30.769958 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:54:30.769969 | orchestrator | 2025-06-05 19:54:30.769980 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-05 19:54:30.769991 | orchestrator | Thursday 05 June 2025 19:53:23 +0000 (0:00:00.612) 0:00:50.168 ********* 2025-06-05 19:54:30.770002 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:54:30.770138 | orchestrator | 2025-06-05 19:54:30.770155 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-05 19:54:30.770167 | orchestrator | Thursday 05 June 2025 19:53:24 +0000 (0:00:01.310) 0:00:51.479 ********* 2025-06-05 19:54:30.770194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.770207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.770219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.770238 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.770250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.770294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.770307 | orchestrator | 2025-06-05 19:54:30.770318 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-05 19:54:30.770329 | orchestrator | Thursday 05 June 2025 19:53:27 +0000 (0:00:03.020) 0:00:54.499 ********* 2025-06-05 19:54:30.770342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:54:30.770367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:54:30.770380 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:54:30.770391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:54:30.770410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:54:30.770422 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:54:30.770438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:54:30.770450 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:54:30.770470 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:54:30.770481 | orchestrator | 2025-06-05 19:54:30.770492 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-05 19:54:30.770503 | orchestrator | Thursday 05 June 2025 19:53:28 +0000 (0:00:00.656) 0:00:55.156 ********* 2025-06-05 19:54:30.770515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 
'listen_port': '9511'}}}})  2025-06-05 19:54:30.770527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:54:30.770539 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:54:30.770563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:54:30.770576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:54:30.770596 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:54:30.770607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-05 19:54:30.770619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-05 19:54:30.770631 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:54:30.770642 | orchestrator | 2025-06-05 19:54:30.770653 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-05 19:54:30.770664 | orchestrator | Thursday 05 June 2025 19:53:29 +0000 (0:00:00.913) 0:00:56.069 ********* 2025-06-05 19:54:30.770676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.770700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.770723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-05 19:54:30.770735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.770747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 19:54:30.770759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.770770 | orchestrator |
2025-06-05 19:54:30.770787 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2025-06-05 19:54:30.770799 | orchestrator | Thursday 05 June 2025 19:53:31 +0000 (0:00:02.296) 0:00:58.366 *********
2025-06-05 19:54:30.770815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.770835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.770847 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.770859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.770883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.770896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.770914 | orchestrator |
2025-06-05 19:54:30.770925 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-06-05 19:54:30.770937 | orchestrator | Thursday 05 June 2025 19:53:37 +0000 (0:00:06.404) 0:01:04.770 *********
2025-06-05 19:54:30.770948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.770960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.770972 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:54:30.770984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.771008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.771027 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:54:30.771038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.771050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.771061 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:54:30.771073 | orchestrator |
2025-06-05 19:54:30.771084 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-06-05 19:54:30.771095 | orchestrator | Thursday 05 June 2025 19:53:39 +0000 (0:00:01.469) 0:01:06.239 *********
2025-06-05 19:54:30.771106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.771124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.771148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.771160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.771171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-05 19:54:30.771183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-05 19:54:30.771194 | orchestrator |
2025-06-05 19:54:30.771206 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-06-05 19:54:30.771217 | orchestrator | Thursday 05 June 2025 19:53:42 +0000 (0:00:03.173) 0:01:09.413 *********
2025-06-05 19:54:30.771228 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:54:30.771239 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:54:30.771250 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:54:30.771294 | orchestrator |
2025-06-05 19:54:30.771306 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-06-05 19:54:30.771327 | orchestrator | Thursday 05 June 2025 19:53:42 +0000 (0:00:00.321) 0:01:09.734 *********
2025-06-05 19:54:30.771338 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.771349 | orchestrator |
2025-06-05 19:54:30.771360 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-06-05 19:54:30.771371 | orchestrator | Thursday 05 June 2025 19:53:45 +0000 (0:00:02.240) 0:01:11.975 *********
2025-06-05 19:54:30.771382 |
orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.771393 | orchestrator |
2025-06-05 19:54:30.771404 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-06-05 19:54:30.771421 | orchestrator | Thursday 05 June 2025 19:53:47 +0000 (0:00:02.229) 0:01:14.204 *********
2025-06-05 19:54:30.771432 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.771443 | orchestrator |
2025-06-05 19:54:30.771454 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-05 19:54:30.771470 | orchestrator | Thursday 05 June 2025 19:54:03 +0000 (0:00:15.917) 0:01:30.122 *********
2025-06-05 19:54:30.771481 | orchestrator |
2025-06-05 19:54:30.771492 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-05 19:54:30.771503 | orchestrator | Thursday 05 June 2025 19:54:03 +0000 (0:00:00.076) 0:01:30.198 *********
2025-06-05 19:54:30.771514 | orchestrator |
2025-06-05 19:54:30.771525 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-06-05 19:54:30.771536 | orchestrator | Thursday 05 June 2025 19:54:03 +0000 (0:00:00.062) 0:01:30.261 *********
2025-06-05 19:54:30.771547 | orchestrator |
2025-06-05 19:54:30.771558 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-06-05 19:54:30.771569 | orchestrator | Thursday 05 June 2025 19:54:03 +0000 (0:00:00.063) 0:01:30.325 *********
2025-06-05 19:54:30.771580 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.771590 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:54:30.771601 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:54:30.771612 | orchestrator |
2025-06-05 19:54:30.771623 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-06-05 19:54:30.771634 | orchestrator | Thursday 05 June 2025 19:54:19 +0000 (0:00:16.347) 0:01:46.672 *********
2025-06-05 19:54:30.771645 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:54:30.771655 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:54:30.771666 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:54:30.771677 | orchestrator |
2025-06-05 19:54:30.771688 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:54:30.771699 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-05 19:54:30.771711 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:54:30.771722 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-05 19:54:30.771733 | orchestrator |
2025-06-05 19:54:30.771743 | orchestrator |
2025-06-05 19:54:30.771754 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:54:30.771765 | orchestrator | Thursday 05 June 2025 19:54:29 +0000 (0:00:09.801) 0:01:56.474 *********
2025-06-05 19:54:30.771776 | orchestrator | ===============================================================================
2025-06-05 19:54:30.771787 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 16.35s
2025-06-05 19:54:30.771798 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.92s
2025-06-05 19:54:30.771809 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.80s
2025-06-05 19:54:30.771819 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.12s
2025-06-05 19:54:30.771830 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.40s
2025-06-05 19:54:30.771845 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.46s
2025-06-05 19:54:30.771856 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.39s
2025-06-05 19:54:30.771867 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.18s
2025-06-05 19:54:30.771878 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.09s
2025-06-05 19:54:30.771889 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.07s
2025-06-05 19:54:30.771900 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.78s
2025-06-05 19:54:30.771910 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.60s
2025-06-05 19:54:30.771921 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.49s
2025-06-05 19:54:30.771932 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.41s
2025-06-05 19:54:30.771943 | orchestrator | magnum : Check magnum containers ---------------------------------------- 3.17s
2025-06-05 19:54:30.771954 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.02s
2025-06-05 19:54:30.771965 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.43s
2025-06-05 19:54:30.771975 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.30s
2025-06-05 19:54:30.771986 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.24s
2025-06-05 19:54:30.771997 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.23s
2025-06-05 19:54:30.772008 | orchestrator | 2025-06-05 19:54:30 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:30.772019 | orchestrator | 2025-06-05 19:54:30 | INFO  | Wait 1 second(s) until
the next check
2025-06-05 19:54:33.804319 | orchestrator | 2025-06-05 19:54:33 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:33.806870 | orchestrator | 2025-06-05 19:54:33 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:33.807822 | orchestrator | 2025-06-05 19:54:33 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:33.809051 | orchestrator | 2025-06-05 19:54:33 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:33.809096 | orchestrator | 2025-06-05 19:54:33 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:36.856689 | orchestrator | 2025-06-05 19:54:36 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:36.856778 | orchestrator | 2025-06-05 19:54:36 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:36.857216 | orchestrator | 2025-06-05 19:54:36 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:36.857793 | orchestrator | 2025-06-05 19:54:36 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:36.857917 | orchestrator | 2025-06-05 19:54:36 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:39.880467 | orchestrator | 2025-06-05 19:54:39 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:39.880565 | orchestrator | 2025-06-05 19:54:39 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:39.881485 | orchestrator | 2025-06-05 19:54:39 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:39.881927 | orchestrator | 2025-06-05 19:54:39 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:39.881963 | orchestrator | 2025-06-05 19:54:39 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:42.922322 | orchestrator | 2025-06-05 19:54:42 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:42.922414 | orchestrator | 2025-06-05 19:54:42 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:42.924911 | orchestrator | 2025-06-05 19:54:42 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:42.929131 | orchestrator | 2025-06-05 19:54:42 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:42.929177 | orchestrator | 2025-06-05 19:54:42 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:45.979840 | orchestrator | 2025-06-05 19:54:45 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:45.981353 | orchestrator | 2025-06-05 19:54:45 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:45.981923 | orchestrator | 2025-06-05 19:54:45 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:45.984149 | orchestrator | 2025-06-05 19:54:45 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:45.985566 | orchestrator | 2025-06-05 19:54:45 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:49.034849 | orchestrator | 2025-06-05 19:54:49 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:49.034953 | orchestrator | 2025-06-05 19:54:49 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:49.036484 | orchestrator | 2025-06-05 19:54:49 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:49.038464 | orchestrator | 2025-06-05 19:54:49 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:49.038996 | orchestrator | 2025-06-05 19:54:49 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:52.072970 | orchestrator | 2025-06-05 19:54:52 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:52.073182 | orchestrator | 2025-06-05 19:54:52 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:52.074272 | orchestrator | 2025-06-05 19:54:52 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:52.075210 | orchestrator | 2025-06-05 19:54:52 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:52.075237 | orchestrator | 2025-06-05 19:54:52 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:55.107578 | orchestrator | 2025-06-05 19:54:55 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:55.107672 | orchestrator | 2025-06-05 19:54:55 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:55.109496 | orchestrator | 2025-06-05 19:54:55 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:55.110209 | orchestrator | 2025-06-05 19:54:55 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:55.110235 | orchestrator | 2025-06-05 19:54:55 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:54:58.159482 | orchestrator | 2025-06-05 19:54:58 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:54:58.159577 | orchestrator | 2025-06-05 19:54:58 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:54:58.159700 | orchestrator | 2025-06-05 19:54:58 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:54:58.159755 | orchestrator | 2025-06-05 19:54:58 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:54:58.159768 | orchestrator | 2025-06-05 19:54:58 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:55:01.202246 | orchestrator | 2025-06-05 19:55:01 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:55:01.204603 | orchestrator | 2025-06-05 19:55:01 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:55:01.204652 | orchestrator | 2025-06-05 19:55:01 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:55:01.205570 | orchestrator | 2025-06-05 19:55:01 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state STARTED
2025-06-05 19:55:01.205595 | orchestrator | 2025-06-05 19:55:01 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:55:04.239463 | orchestrator | 2025-06-05 19:55:04 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:55:04.239814 | orchestrator | 2025-06-05 19:55:04 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:55:04.240776 | orchestrator | 2025-06-05 19:55:04 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:55:04.241570 | orchestrator | 2025-06-05 19:55:04 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:55:04.243984 | orchestrator |
2025-06-05 19:55:04.244026 | orchestrator | 2025-06-05 19:55:04 | INFO  | Task 09e0cf60-c51b-4cfb-ace1-0d1d2142ad90 is in state SUCCESS
2025-06-05 19:55:04.246067 | orchestrator |
2025-06-05 19:55:04.246101 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:55:04.246114 | orchestrator |
2025-06-05 19:55:04.246126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:55:04.246138 | orchestrator | Thursday 05 June 2025 19:50:26 +0000 (0:00:00.193) 0:00:00.193 *********
2025-06-05 19:55:04.246149 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:55:04.246162 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:55:04.246173 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:55:04.246184 | orchestrator |
ok: [testbed-node-3] 2025-06-05 19:55:04.246195 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:55:04.246206 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:55:04.246217 | orchestrator | 2025-06-05 19:55:04.246228 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:55:04.246240 | orchestrator | Thursday 05 June 2025 19:50:26 +0000 (0:00:00.528) 0:00:00.722 ********* 2025-06-05 19:55:04.246251 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-05 19:55:04.246262 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-05 19:55:04.246273 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-05 19:55:04.246284 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-05 19:55:04.246295 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-05 19:55:04.246371 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-05 19:55:04.246385 | orchestrator | 2025-06-05 19:55:04.246396 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-05 19:55:04.246408 | orchestrator | 2025-06-05 19:55:04.246420 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-05 19:55:04.246431 | orchestrator | Thursday 05 June 2025 19:50:27 +0000 (0:00:00.497) 0:00:01.220 ********* 2025-06-05 19:55:04.246443 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:55:04.246456 | orchestrator | 2025-06-05 19:55:04.246467 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-05 19:55:04.246478 | orchestrator | Thursday 05 June 2025 19:50:28 +0000 (0:00:01.251) 0:00:02.471 ********* 2025-06-05 19:55:04.246518 | orchestrator | ok: 
[testbed-node-1] 2025-06-05 19:55:04.246530 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:55:04.246541 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:55:04.246552 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:55:04.246563 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:55:04.246573 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:55:04.246584 | orchestrator | 2025-06-05 19:55:04.246595 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-05 19:55:04.246606 | orchestrator | Thursday 05 June 2025 19:50:29 +0000 (0:00:01.039) 0:00:03.511 ********* 2025-06-05 19:55:04.246617 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:55:04.246629 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:55:04.246641 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:55:04.247145 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:55:04.247162 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:55:04.247393 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:55:04.247411 | orchestrator | 2025-06-05 19:55:04.247442 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-05 19:55:04.247454 | orchestrator | Thursday 05 June 2025 19:50:30 +0000 (0:00:00.978) 0:00:04.490 ********* 2025-06-05 19:55:04.247465 | orchestrator | ok: [testbed-node-0] => { 2025-06-05 19:55:04.247477 | orchestrator |  "changed": false, 2025-06-05 19:55:04.247488 | orchestrator |  "msg": "All assertions passed" 2025-06-05 19:55:04.247499 | orchestrator | } 2025-06-05 19:55:04.247511 | orchestrator | ok: [testbed-node-1] => { 2025-06-05 19:55:04.247535 | orchestrator |  "changed": false, 2025-06-05 19:55:04.247547 | orchestrator |  "msg": "All assertions passed" 2025-06-05 19:55:04.247558 | orchestrator | } 2025-06-05 19:55:04.247569 | orchestrator | ok: [testbed-node-2] => { 2025-06-05 19:55:04.247580 | orchestrator |  "changed": false, 2025-06-05 19:55:04.247591 | orchestrator |  "msg": 
"All assertions passed" 2025-06-05 19:55:04.247602 | orchestrator | } 2025-06-05 19:55:04.247612 | orchestrator | ok: [testbed-node-3] => { 2025-06-05 19:55:04.247623 | orchestrator |  "changed": false, 2025-06-05 19:55:04.247635 | orchestrator |  "msg": "All assertions passed" 2025-06-05 19:55:04.247645 | orchestrator | } 2025-06-05 19:55:04.247656 | orchestrator | ok: [testbed-node-4] => { 2025-06-05 19:55:04.247667 | orchestrator |  "changed": false, 2025-06-05 19:55:04.247678 | orchestrator |  "msg": "All assertions passed" 2025-06-05 19:55:04.247689 | orchestrator | } 2025-06-05 19:55:04.247699 | orchestrator | ok: [testbed-node-5] => { 2025-06-05 19:55:04.247710 | orchestrator |  "changed": false, 2025-06-05 19:55:04.247721 | orchestrator |  "msg": "All assertions passed" 2025-06-05 19:55:04.247732 | orchestrator | } 2025-06-05 19:55:04.247742 | orchestrator | 2025-06-05 19:55:04.247753 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-05 19:55:04.247764 | orchestrator | Thursday 05 June 2025 19:50:31 +0000 (0:00:00.538) 0:00:05.028 ********* 2025-06-05 19:55:04.247775 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.247786 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.247797 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.247807 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.247818 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.247829 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.247840 | orchestrator | 2025-06-05 19:55:04.247851 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-05 19:55:04.247862 | orchestrator | Thursday 05 June 2025 19:50:31 +0000 (0:00:00.421) 0:00:05.450 ********* 2025-06-05 19:55:04.247873 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-05 19:55:04.247883 | orchestrator | 2025-06-05 
19:55:04.247894 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-06-05 19:55:04.247905 | orchestrator | Thursday 05 June 2025 19:50:35 +0000 (0:00:03.496) 0:00:08.946 *********
2025-06-05 19:55:04.247916 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-06-05 19:55:04.247941 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-06-05 19:55:04.247953 | orchestrator |
2025-06-05 19:55:04.248007 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-06-05 19:55:04.248021 | orchestrator | Thursday 05 June 2025 19:50:42 +0000 (0:00:07.767) 0:00:16.714 *********
2025-06-05 19:55:04.248034 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-05 19:55:04.248047 | orchestrator |
2025-06-05 19:55:04.248060 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-06-05 19:55:04.248073 | orchestrator | Thursday 05 June 2025 19:50:46 +0000 (0:00:03.669) 0:00:20.383 *********
2025-06-05 19:55:04.248085 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-05 19:55:04.248097 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-06-05 19:55:04.248110 | orchestrator |
2025-06-05 19:55:04.248123 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-06-05 19:55:04.248135 | orchestrator | Thursday 05 June 2025 19:50:51 +0000 (0:00:04.404) 0:00:24.787 *********
2025-06-05 19:55:04.248148 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-05 19:55:04.248160 | orchestrator |
2025-06-05 19:55:04.248173 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-06-05 19:55:04.248186 | orchestrator | Thursday 05 June 2025 19:50:54 +0000 (0:00:03.797) 0:00:28.585 *********
2025-06-05 19:55:04.248199 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-06-05 19:55:04.248211 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-06-05 19:55:04.248224 | orchestrator |
2025-06-05 19:55:04.248237 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-05 19:55:04.248250 | orchestrator | Thursday 05 June 2025 19:51:03 +0000 (0:00:08.837) 0:00:37.422 *********
2025-06-05 19:55:04.248263 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.248276 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.248288 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.248300 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.248331 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.248344 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.248356 | orchestrator |
2025-06-05 19:55:04.248369 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-06-05 19:55:04.248382 | orchestrator | Thursday 05 June 2025 19:51:04 +0000 (0:00:00.748) 0:00:38.170 *********
2025-06-05 19:55:04.248394 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.248405 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.248416 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.248426 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.248437 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.248448 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.248459 | orchestrator |
2025-06-05 19:55:04.248475 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-06-05 19:55:04.248487 | orchestrator | Thursday 05 June 2025 19:51:07 +0000 (0:00:02.827) 0:00:40.997 *********
2025-06-05 19:55:04.248498 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:55:04.248508 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:55:04.248519 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:55:04.248530 | orchestrator | ok: [testbed-node-3]
2025-06-05 19:55:04.248541 | orchestrator | ok: [testbed-node-4]
2025-06-05 19:55:04.248552 | orchestrator | ok: [testbed-node-5]
2025-06-05 19:55:04.248562 | orchestrator |
2025-06-05 19:55:04.248573 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-05 19:55:04.248584 | orchestrator | Thursday 05 June 2025 19:51:08 +0000 (0:00:01.147) 0:00:42.145 *********
2025-06-05 19:55:04.248595 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.248606 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.248617 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.248635 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.248651 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.248662 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.248673 | orchestrator |
2025-06-05 19:55:04.248684 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-06-05 19:55:04.248695 | orchestrator | Thursday 05 June 2025 19:51:10 +0000 (0:00:02.043) 0:00:44.188 *********
2025-06-05 19:55:04.248710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'},
'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.248762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.248777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.248790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.248814 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.248826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.248837 | orchestrator |
2025-06-05 19:55:04.248848 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-06-05 19:55:04.248859 | orchestrator | Thursday 05 June 2025 19:51:13 +0000 (0:00:02.615) 0:00:46.804 *********
2025-06-05 19:55:04.248871 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:04.248882 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-06-05 19:55:04.248893 | orchestrator | due to this access issue:
2025-06-05 19:55:04.248904 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-06-05 19:55:04.248915 | orchestrator | a directory
2025-06-05 19:55:04.248926 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:55:04.248937 | orchestrator |
2025-06-05 19:55:04.248977 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-05 19:55:04.248990 | orchestrator | Thursday 05 June 2025 19:51:13 +0000 (0:00:00.838) 0:00:47.642 *********
2025-06-05 19:55:04.249001 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 19:55:04.249014 | orchestrator |
2025-06-05 19:55:04.249025 | orchestrator | TASK
[service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-06-05 19:55:04.249036 | orchestrator | Thursday 05 June 2025 19:51:14 +0000 (0:00:01.054) 0:00:48.697 *********
2025-06-05 19:55:04.249047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249137 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249169 | orchestrator |
2025-06-05 19:55:04.249180 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-06-05 19:55:04.249191 | orchestrator | Thursday 05 June 2025 19:51:18 +0000 (0:00:03.890) 0:00:52.588 *********
2025-06-05 19:55:04.249208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True,
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249219 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.249231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249242 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.249282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249295 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.249322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249335 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.249353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249365 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.249388 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249399 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.249410 | orchestrator |
2025-06-05 19:55:04.249421 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-06-05 19:55:04.249432 | orchestrator | Thursday 05 June 2025 19:51:21 +0000 (0:00:02.573) 0:00:55.161 *********
2025-06-05 19:55:04.249444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249455 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.249499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249512 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.249524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249542 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.249553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249564 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.249581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249592 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.249604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249615 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.249626 | orchestrator |
2025-06-05 19:55:04.249637 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-06-05 19:55:04.249652 | orchestrator | Thursday 05 June 2025 19:51:24 +0000 (0:00:03.276) 0:00:58.438 *********
2025-06-05 19:55:04.249664 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.249675 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.249686 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.249697 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.249707 |
orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.249718 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.249735 | orchestrator |
2025-06-05 19:55:04.249746 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-06-05 19:55:04.249757 | orchestrator | Thursday 05 June 2025 19:51:27 +0000 (0:00:02.787) 0:01:01.226 *********
2025-06-05 19:55:04.249768 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.249779 | orchestrator |
2025-06-05 19:55:04.249790 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-06-05 19:55:04.249801 | orchestrator | Thursday 05 June 2025 19:51:27 +0000 (0:00:00.125) 0:01:01.351 *********
2025-06-05 19:55:04.249811 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.249822 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.249833 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.249844 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.249855 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.249865 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.249876 | orchestrator |
2025-06-05 19:55:04.249887 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-06-05 19:55:04.249898 | orchestrator | Thursday 05 June 2025 19:51:28 +0000 (0:00:00.551) 0:01:01.902 *********
2025-06-05 19:55:04.249909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249921 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:04.249936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.249949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.249960 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:04.249971 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:04.249996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-05 19:55:04.250008 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:04.250052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.250064 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:04.250075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-05 19:55:04.250087 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:04.250097 | orchestrator |
2025-06-05 19:55:04.250109 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-06-05 19:55:04.250120 | orchestrator | Thursday 05 June 2025 19:51:30 +0000 (0:00:02.441) 0:01:04.344 *********
2025-06-05 19:55:04.250136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250174 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.250186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.250214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.250232 | orchestrator | 2025-06-05 19:55:04.250243 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-05 19:55:04.250254 | orchestrator | Thursday 05 June 2025 19:51:34 +0000 (0:00:03.946) 0:01:08.290 ********* 2025-06-05 19:55:04.250270 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.250283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.250340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250377 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.250389 | orchestrator | 2025-06-05 19:55:04.250401 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-05 19:55:04.250412 | orchestrator | Thursday 05 June 2025 19:51:41 +0000 (0:00:07.085) 0:01:15.376 ********* 2025-06-05 19:55:04.250423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.250434 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.250446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.250457 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.250473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.250490 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.250502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250521 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250545 | orchestrator | 2025-06-05 19:55:04.250556 | orchestrator | TASK [neutron : Copying over ssh key] 
****************************************** 2025-06-05 19:55:04.250567 | orchestrator | Thursday 05 June 2025 19:51:46 +0000 (0:00:04.815) 0:01:20.192 ********* 2025-06-05 19:55:04.250578 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.250589 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.250600 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.250611 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:55:04.250622 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:04.250632 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:55:04.250643 | orchestrator | 2025-06-05 19:55:04.250654 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-05 19:55:04.250665 | orchestrator | Thursday 05 June 2025 19:51:50 +0000 (0:00:04.071) 0:01:24.264 ********* 2025-06-05 19:55:04.250681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.250702 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.250714 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.250725 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.250742 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.250754 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.250765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.250813 | orchestrator | 2025-06-05 19:55:04.250824 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-05 19:55:04.250835 | orchestrator | Thursday 05 June 2025 19:51:54 +0000 (0:00:04.379) 0:01:28.643 ********* 2025-06-05 19:55:04.250846 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.250857 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.250867 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.250878 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.250889 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.250900 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.250911 | orchestrator | 2025-06-05 19:55:04.250922 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-05 19:55:04.250933 | orchestrator | Thursday 05 June 2025 19:51:56 +0000 (0:00:02.030) 0:01:30.674 ********* 2025-06-05 19:55:04.250943 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.250954 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.250965 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.250976 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.250987 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.250998 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251009 | orchestrator | 2025-06-05 19:55:04.251019 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-05 19:55:04.251030 | orchestrator | Thursday 05 June 2025 19:51:58 +0000 (0:00:01.831) 
0:01:32.505 ********* 2025-06-05 19:55:04.251041 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251052 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251063 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251079 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251091 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.251102 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251112 | orchestrator | 2025-06-05 19:55:04.251123 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-05 19:55:04.251134 | orchestrator | Thursday 05 June 2025 19:52:00 +0000 (0:00:01.784) 0:01:34.290 ********* 2025-06-05 19:55:04.251145 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251156 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251167 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251177 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251188 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251199 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.251209 | orchestrator | 2025-06-05 19:55:04.251220 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-05 19:55:04.251231 | orchestrator | Thursday 05 June 2025 19:52:02 +0000 (0:00:02.044) 0:01:36.334 ********* 2025-06-05 19:55:04.251242 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251253 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251263 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251274 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251292 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251303 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.251331 | orchestrator | 2025-06-05 19:55:04.251342 | orchestrator | TASK [neutron : Copying over 
dhcp_agent.ini] *********************************** 2025-06-05 19:55:04.251353 | orchestrator | Thursday 05 June 2025 19:52:04 +0000 (0:00:01.742) 0:01:38.076 ********* 2025-06-05 19:55:04.251364 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251375 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251385 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251396 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251407 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251417 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.251428 | orchestrator | 2025-06-05 19:55:04.251439 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-05 19:55:04.251450 | orchestrator | Thursday 05 June 2025 19:52:06 +0000 (0:00:01.874) 0:01:39.951 ********* 2025-06-05 19:55:04.251461 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-05 19:55:04.251471 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251482 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-05 19:55:04.251493 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251504 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-05 19:55:04.251515 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251526 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-05 19:55:04.251536 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251547 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-05 19:55:04.251558 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251569 | orchestrator | skipping: [testbed-node-5] => 
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-05 19:55:04.251580 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.251590 | orchestrator | 2025-06-05 19:55:04.251601 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-05 19:55:04.251612 | orchestrator | Thursday 05 June 2025 19:52:07 +0000 (0:00:01.743) 0:01:41.695 ********* 2025-06-05 19:55:04.251628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.251641 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.251677 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.251699 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.251722 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.251749 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
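The per-item dicts in the `skipping:`/`changed:` lines above are printed as Python literals, so they can be recovered for offline inspection with `ast.literal_eval`. A minimal sketch; the sample dict below is abridged from the log output, not the full item:

```python
import ast

# Abridged item dict as it appears in the Ansible loop output above.
item_repr = (
    "{'key': 'neutron-server', 'value': {'container_name': 'neutron_server', "
    "'enabled': True, 'healthcheck': {'interval': '30', 'retries': '3', "
    "'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696']}}}"
)

item = ast.literal_eval(item_repr)          # safe: parses literals only, no code execution
healthcheck = item["value"]["healthcheck"]  # the kolla container healthcheck block
print(item["key"], healthcheck["test"][1])
```

This is handy when grepping a long run for a single container's healthcheck or volume configuration without re-running the play.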
2025-06-05 19:55:04.251772 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.251783 | orchestrator | 2025-06-05 19:55:04.251794 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-05 19:55:04.251811 | orchestrator | Thursday 05 June 2025 19:52:09 +0000 (0:00:01.785) 0:01:43.480 ********* 2025-06-05 19:55:04.251828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.251840 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.251851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.251862 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.251878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.251890 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.251901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.251912 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.251924 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.251943 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.251960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.251972 | orchestrator | 
skipping: [testbed-node-5] 2025-06-05 19:55:04.251983 | orchestrator | 2025-06-05 19:55:04.251994 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-05 19:55:04.252005 | orchestrator | Thursday 05 June 2025 19:52:11 +0000 (0:00:01.662) 0:01:45.143 ********* 2025-06-05 19:55:04.252016 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252027 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252038 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252048 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252059 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252070 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252081 | orchestrator | 2025-06-05 19:55:04.252092 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-05 19:55:04.252102 | orchestrator | Thursday 05 June 2025 19:52:13 +0000 (0:00:02.413) 0:01:47.556 ********* 2025-06-05 19:55:04.252113 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252124 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252135 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252146 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:55:04.252157 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:55:04.252167 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:55:04.252178 | orchestrator | 2025-06-05 19:55:04.252189 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-05 19:55:04.252200 | orchestrator | Thursday 05 June 2025 19:52:18 +0000 (0:00:04.780) 0:01:52.337 ********* 2025-06-05 19:55:04.252211 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252222 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252232 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252243 | orchestrator | 
skipping: [testbed-node-4] 2025-06-05 19:55:04.252254 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252265 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252276 | orchestrator | 2025-06-05 19:55:04.252287 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-05 19:55:04.252298 | orchestrator | Thursday 05 June 2025 19:52:20 +0000 (0:00:01.832) 0:01:54.169 ********* 2025-06-05 19:55:04.252361 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252373 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252384 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252395 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252414 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252425 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252435 | orchestrator | 2025-06-05 19:55:04.252447 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-05 19:55:04.252463 | orchestrator | Thursday 05 June 2025 19:52:23 +0000 (0:00:03.118) 0:01:57.288 ********* 2025-06-05 19:55:04.252474 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252485 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252496 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252507 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252518 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252544 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252566 | orchestrator | 2025-06-05 19:55:04.252577 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-05 19:55:04.252588 | orchestrator | Thursday 05 June 2025 19:52:26 +0000 (0:00:02.673) 0:01:59.961 ********* 2025-06-05 19:55:04.252599 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252610 | orchestrator | 
skipping: [testbed-node-1] 2025-06-05 19:55:04.252621 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252632 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252643 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252654 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252665 | orchestrator | 2025-06-05 19:55:04.252676 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-05 19:55:04.252687 | orchestrator | Thursday 05 June 2025 19:52:27 +0000 (0:00:01.558) 0:02:01.520 ********* 2025-06-05 19:55:04.252698 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252709 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252720 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252731 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252742 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252752 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252763 | orchestrator | 2025-06-05 19:55:04.252774 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-05 19:55:04.252785 | orchestrator | Thursday 05 June 2025 19:52:30 +0000 (0:00:02.998) 0:02:04.518 ********* 2025-06-05 19:55:04.252796 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252807 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252817 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252828 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252839 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252850 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252861 | orchestrator | 2025-06-05 19:55:04.252872 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-05 19:55:04.252882 | orchestrator | Thursday 05 June 2025 19:52:33 +0000 (0:00:02.615) 
0:02:07.134 ********* 2025-06-05 19:55:04.252892 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.252907 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252917 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.252927 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.252936 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.252946 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.252955 | orchestrator | 2025-06-05 19:55:04.252965 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-05 19:55:04.252975 | orchestrator | Thursday 05 June 2025 19:52:35 +0000 (0:00:01.942) 0:02:09.076 ********* 2025-06-05 19:55:04.252984 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.252994 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.253003 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.253013 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.253022 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.253032 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.253041 | orchestrator | 2025-06-05 19:55:04.253051 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-05 19:55:04.253067 | orchestrator | Thursday 05 June 2025 19:52:36 +0000 (0:00:01.558) 0:02:10.634 ********* 2025-06-05 19:55:04.253076 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-05 19:55:04.253086 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.253096 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-05 19:55:04.253105 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.253115 | orchestrator | skipping: [testbed-node-2] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-05 19:55:04.253125 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.253134 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-05 19:55:04.253144 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.253153 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-05 19:55:04.253163 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.253173 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-05 19:55:04.253182 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.253192 | orchestrator | 2025-06-05 19:55:04.253202 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-05 19:55:04.253211 | orchestrator | Thursday 05 June 2025 19:52:39 +0000 (0:00:02.639) 0:02:13.274 ********* 2025-06-05 19:55:04.253227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
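Each task header above carries two profile-style timings: the per-task duration in parentheses and the cumulative play time, both in `h:mm:ss.mmm` form. A minimal sketch for converting them to seconds when profiling a run (hypothetical helper, not part of the job):

```python
def hms_to_seconds(stamp: str) -> float:
    """Convert an Ansible profile timing like '0:02:13.274' to seconds."""
    hours, minutes, seconds = stamp.split(":")
    return int(hours) * 3600 + int(minutes) * 60 + float(seconds)

# Values taken from the 'Copying over neutron_taas.conf' task header above.
per_task = hms_to_seconds("0:00:02.639")    # duration of the task itself
cumulative = hms_to_seconds("0:02:13.274")  # elapsed play time so far
print(per_task, cumulative)
```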
2025-06-05 19:55:04.253237 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.253247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.253257 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.253275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}})  2025-06-05 19:55:04.253291 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.253301 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.253327 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.253338 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.253347 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.253366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-05 19:55:04.253376 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.253386 | orchestrator | 2025-06-05 19:55:04.253396 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-05 19:55:04.253405 | orchestrator | Thursday 05 June 2025 19:52:41 +0000 (0:00:02.191) 0:02:15.465 ********* 2025-06-05 19:55:04.253415 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.253439 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.253450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.253461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.253475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-05 19:55:04.253486 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-05 19:55:04.253502 | orchestrator | 2025-06-05 19:55:04.253512 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-05 19:55:04.253526 | orchestrator | Thursday 05 June 2025 19:52:44 +0000 (0:00:02.600) 0:02:18.066 ********* 2025-06-05 19:55:04.253536 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:04.253546 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:04.253555 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:04.253565 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:04.253575 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:04.253585 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:04.253594 | orchestrator | 2025-06-05 19:55:04.253604 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-05 19:55:04.253613 | orchestrator | Thursday 05 June 2025 19:52:45 +0000 (0:00:00.686) 0:02:18.752 ********* 2025-06-05 19:55:04.253623 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:04.253633 | orchestrator | 2025-06-05 19:55:04.253642 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-05 19:55:04.253652 | orchestrator | Thursday 05 June 2025 19:52:47 +0000 (0:00:02.322) 0:02:21.075 ********* 2025-06-05 19:55:04.253662 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:04.253671 | orchestrator | 2025-06-05 19:55:04.253681 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-05 19:55:04.253691 | orchestrator | Thursday 05 June 2025 19:52:49 +0000 (0:00:02.340) 0:02:23.415 ********* 2025-06-05 19:55:04.253700 | 
orchestrator | changed: [testbed-node-0]
2025-06-05 19:55:04.253710 | orchestrator |
2025-06-05 19:55:04.253719 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-05 19:55:04.253729 | orchestrator | Thursday 05 June 2025 19:53:32 +0000 (0:00:42.985) 0:03:06.400 *********
2025-06-05 19:55:04.253738 | orchestrator |
2025-06-05 19:55:04.253748 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-05 19:55:04.253758 | orchestrator | Thursday 05 June 2025 19:53:32 +0000 (0:00:00.091) 0:03:06.492 *********
2025-06-05 19:55:04.253767 | orchestrator |
2025-06-05 19:55:04.253777 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-05 19:55:04.253786 | orchestrator | Thursday 05 June 2025 19:53:32 +0000 (0:00:00.213) 0:03:06.706 *********
2025-06-05 19:55:04.253796 | orchestrator |
2025-06-05 19:55:04.253805 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-05 19:55:04.253815 | orchestrator | Thursday 05 June 2025 19:53:33 +0000 (0:00:00.098) 0:03:06.804 *********
2025-06-05 19:55:04.253824 | orchestrator |
2025-06-05 19:55:04.253834 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-05 19:55:04.253844 | orchestrator | Thursday 05 June 2025 19:53:33 +0000 (0:00:00.132) 0:03:06.936 *********
2025-06-05 19:55:04.253853 | orchestrator |
2025-06-05 19:55:04.253863 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2025-06-05 19:55:04.253872 | orchestrator | Thursday 05 June 2025 19:53:33 +0000 (0:00:00.117) 0:03:07.054 *********
2025-06-05 19:55:04.253882 | orchestrator |
2025-06-05 19:55:04.253892 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2025-06-05 19:55:04.253901 | orchestrator | Thursday 05 June 2025 19:53:33 +0000 (0:00:00.084) 0:03:07.138 *********
2025-06-05 19:55:04.253916 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:55:04.253933 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:55:04.253948 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:55:04.253963 | orchestrator |
2025-06-05 19:55:04.253978 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2025-06-05 19:55:04.253994 | orchestrator | Thursday 05 June 2025 19:54:04 +0000 (0:00:30.922) 0:03:38.060 *********
2025-06-05 19:55:04.254047 | orchestrator | changed: [testbed-node-3]
2025-06-05 19:55:04.254068 | orchestrator | changed: [testbed-node-4]
2025-06-05 19:55:04.254085 | orchestrator | changed: [testbed-node-5]
2025-06-05 19:55:04.254109 | orchestrator |
2025-06-05 19:55:04.254125 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:55:04.254143 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-05 19:55:04.254160 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-05 19:55:04.254177 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2025-06-05 19:55:04.254194 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-05 19:55:04.254210 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-05 19:55:04.254227 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0
2025-06-05 19:55:04.254242 | orchestrator |
2025-06-05 19:55:04.254252 | orchestrator |
2025-06-05 19:55:04.254262 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:55:04.254271 | orchestrator | Thursday 05 June 2025 19:55:02 +0000 (0:00:57.929) 0:04:35.990 *********
2025-06-05 19:55:04.254281 | orchestrator | ===============================================================================
2025-06-05 19:55:04.254291 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 57.93s
2025-06-05 19:55:04.254300 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.99s
2025-06-05 19:55:04.254366 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.92s
2025-06-05 19:55:04.254377 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.84s
2025-06-05 19:55:04.254396 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.77s
2025-06-05 19:55:04.254406 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.09s
2025-06-05 19:55:04.254416 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.82s
2025-06-05 19:55:04.254426 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.78s
2025-06-05 19:55:04.254435 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.40s
2025-06-05 19:55:04.254450 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.38s
2025-06-05 19:55:04.254466 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.07s
2025-06-05 19:55:04.254480 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.95s
2025-06-05 19:55:04.254495 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.89s
2025-06-05 19:55:04.254511 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.80s
2025-06-05 19:55:04.254526 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.67s
2025-06-05 19:55:04.254542 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.50s
2025-06-05 19:55:04.254557 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.28s
2025-06-05 19:55:04.254572 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.12s
2025-06-05 19:55:04.254587 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.00s
2025-06-05 19:55:04.254602 | orchestrator | Load and persist kernel modules ----------------------------------------- 2.83s
2025-06-05 19:55:04.254630 | orchestrator | 2025-06-05 19:55:04 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:55:07.281875 | orchestrator | 2025-06-05 19:55:07 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:55:07.282254 | orchestrator | 2025-06-05 19:55:07 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:55:07.288623 | orchestrator | 2025-06-05 19:55:07 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:55:07.289192 | orchestrator | 2025-06-05 19:55:07 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05 19:55:07.289225 | orchestrator | 2025-06-05 19:55:07 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:55:10.332242 | orchestrator | 2025-06-05 19:55:10 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:55:10.332396 | orchestrator | 2025-06-05 19:55:10 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:55:10.332549 | orchestrator | 2025-06-05 19:55:10 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:55:10.336773 | orchestrator | 2025-06-05 19:55:10 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED
2025-06-05
19:55:10.336822 | orchestrator | 2025-06-05 19:55:10 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:13.365409 | orchestrator | 2025-06-05 19:55:13 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:13.365621 | orchestrator | 2025-06-05 19:55:13 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:13.366216 | orchestrator | 2025-06-05 19:55:13 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:13.366646 | orchestrator | 2025-06-05 19:55:13 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:13.366672 | orchestrator | 2025-06-05 19:55:13 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:16.394764 | orchestrator | 2025-06-05 19:55:16 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:16.394849 | orchestrator | 2025-06-05 19:55:16 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:16.394864 | orchestrator | 2025-06-05 19:55:16 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:16.394876 | orchestrator | 2025-06-05 19:55:16 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:16.394887 | orchestrator | 2025-06-05 19:55:16 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:19.422816 | orchestrator | 2025-06-05 19:55:19 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:19.422901 | orchestrator | 2025-06-05 19:55:19 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:19.425304 | orchestrator | 2025-06-05 19:55:19 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:19.425394 | orchestrator | 2025-06-05 19:55:19 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:19.425408 | orchestrator 
| 2025-06-05 19:55:19 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:22.464948 | orchestrator | 2025-06-05 19:55:22 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:22.467329 | orchestrator | 2025-06-05 19:55:22 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:22.467594 | orchestrator | 2025-06-05 19:55:22 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:22.467655 | orchestrator | 2025-06-05 19:55:22 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:22.467669 | orchestrator | 2025-06-05 19:55:22 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:25.503102 | orchestrator | 2025-06-05 19:55:25 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:25.504690 | orchestrator | 2025-06-05 19:55:25 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:25.508213 | orchestrator | 2025-06-05 19:55:25 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:25.510087 | orchestrator | 2025-06-05 19:55:25 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:25.510165 | orchestrator | 2025-06-05 19:55:25 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:28.539080 | orchestrator | 2025-06-05 19:55:28 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:28.541991 | orchestrator | 2025-06-05 19:55:28 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:28.542937 | orchestrator | 2025-06-05 19:55:28 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:28.543821 | orchestrator | 2025-06-05 19:55:28 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:28.543844 | orchestrator | 2025-06-05 19:55:28 | INFO  | 
Wait 1 second(s) until the next check 2025-06-05 19:55:31.570263 | orchestrator | 2025-06-05 19:55:31 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:31.570855 | orchestrator | 2025-06-05 19:55:31 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:31.572101 | orchestrator | 2025-06-05 19:55:31 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:31.573392 | orchestrator | 2025-06-05 19:55:31 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:31.573599 | orchestrator | 2025-06-05 19:55:31 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:34.605790 | orchestrator | 2025-06-05 19:55:34 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:34.605995 | orchestrator | 2025-06-05 19:55:34 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:34.606077 | orchestrator | 2025-06-05 19:55:34 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:34.608329 | orchestrator | 2025-06-05 19:55:34 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:34.608377 | orchestrator | 2025-06-05 19:55:34 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:37.644550 | orchestrator | 2025-06-05 19:55:37 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:37.644668 | orchestrator | 2025-06-05 19:55:37 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:37.645783 | orchestrator | 2025-06-05 19:55:37 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:37.647964 | orchestrator | 2025-06-05 19:55:37 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:37.648013 | orchestrator | 2025-06-05 19:55:37 | INFO  | Wait 1 second(s) until the next 
check 2025-06-05 19:55:40.692986 | orchestrator | 2025-06-05 19:55:40 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:40.693090 | orchestrator | 2025-06-05 19:55:40 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:40.693533 | orchestrator | 2025-06-05 19:55:40 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:40.697041 | orchestrator | 2025-06-05 19:55:40 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:40.697069 | orchestrator | 2025-06-05 19:55:40 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:43.725356 | orchestrator | 2025-06-05 19:55:43 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:43.726933 | orchestrator | 2025-06-05 19:55:43 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:43.727767 | orchestrator | 2025-06-05 19:55:43 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:43.728432 | orchestrator | 2025-06-05 19:55:43 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:43.728456 | orchestrator | 2025-06-05 19:55:43 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:46.763158 | orchestrator | 2025-06-05 19:55:46 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:46.764877 | orchestrator | 2025-06-05 19:55:46 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:46.767537 | orchestrator | 2025-06-05 19:55:46 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:46.769125 | orchestrator | 2025-06-05 19:55:46 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:46.769149 | orchestrator | 2025-06-05 19:55:46 | INFO  | Wait 1 second(s) until the next check 2025-06-05 
19:55:49.802858 | orchestrator | 2025-06-05 19:55:49 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:49.803287 | orchestrator | 2025-06-05 19:55:49 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:49.803635 | orchestrator | 2025-06-05 19:55:49 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:49.807653 | orchestrator | 2025-06-05 19:55:49 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state STARTED 2025-06-05 19:55:49.807705 | orchestrator | 2025-06-05 19:55:49 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:52.843648 | orchestrator | 2025-06-05 19:55:52 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:52.847886 | orchestrator | 2025-06-05 19:55:52 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:52.849219 | orchestrator | 2025-06-05 19:55:52 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:52.853628 | orchestrator | 2025-06-05 19:55:52 | INFO  | Task 566c7740-265e-4c70-b95f-3b97d80f870a is in state SUCCESS 2025-06-05 19:55:52.854230 | orchestrator | 2025-06-05 19:55:52.855534 | orchestrator | 2025-06-05 19:55:52.855570 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:55:52.855578 | orchestrator | 2025-06-05 19:55:52.855585 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:55:52.855592 | orchestrator | Thursday 05 June 2025 19:52:49 +0000 (0:00:00.285) 0:00:00.285 ********* 2025-06-05 19:55:52.855599 | orchestrator | ok: [testbed-manager] 2025-06-05 19:55:52.855607 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:55:52.855614 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:55:52.855621 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:55:52.855628 | orchestrator | ok: 
[testbed-node-3] 2025-06-05 19:55:52.855655 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:55:52.855661 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:55:52.855668 | orchestrator | 2025-06-05 19:55:52.855675 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:55:52.855681 | orchestrator | Thursday 05 June 2025 19:52:50 +0000 (0:00:00.758) 0:00:01.044 ********* 2025-06-05 19:55:52.855688 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855695 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855702 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855708 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855715 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855721 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855728 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-05 19:55:52.855734 | orchestrator | 2025-06-05 19:55:52.855741 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-05 19:55:52.855747 | orchestrator | 2025-06-05 19:55:52.855754 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-05 19:55:52.855760 | orchestrator | Thursday 05 June 2025 19:52:50 +0000 (0:00:00.698) 0:00:01.743 ********* 2025-06-05 19:55:52.855767 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:55:52.855775 | orchestrator | 2025-06-05 19:55:52.855782 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-05 19:55:52.855788 | orchestrator 
| Thursday 05 June 2025 19:52:52 +0000 (0:00:01.387) 0:00:03.131 ********* 2025-06-05 19:55:52.855798 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-05 19:55:52.855809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.855817 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.855823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.855846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.855853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.855860 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.855868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.855875 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.855881 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 
19:55:52.855888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.855906 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-05 19:55:52.855914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.855921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.855928 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.855934 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.855941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.855947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.855962 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.855969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856315 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856433 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856454 | orchestrator | 2025-06-05 19:55:52.856461 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-05 19:55:52.856467 | orchestrator | Thursday 05 June 2025 19:52:55 +0000 (0:00:03.080) 0:00:06.211 ********* 2025-06-05 19:55:52.856474 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:55:52.856480 | orchestrator | 2025-06-05 19:55:52.856487 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-05 19:55:52.856493 | 
orchestrator | Thursday 05 June 2025 19:52:56 +0000 (0:00:01.142) 0:00:07.354 ********* 2025-06-05 19:55:52.856500 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-05 19:55:52.856507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856536 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856556 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.856562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856580 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856591 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856604 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}}) 2025-06-05 19:55:52.856636 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-05 19:55:52.856647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856654 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856683 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.856707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856713 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.856726 | orchestrator | 2025-06-05 19:55:52.856733 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-05 19:55:52.856739 | orchestrator | Thursday 05 June 2025 19:53:01 +0000 (0:00:05.442) 0:00:12.796 ********* 2025-06-05 19:55:52.856750 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 
'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-05 19:55:52.856757 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.856763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.856785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.856792 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-05 19:55:52.856799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857015 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857035 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:55:52.857051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857110 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:52.857116 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:52.857135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857148 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:52.857155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857166 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857179 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:52.857186 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857205 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857213 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:52.857219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857242 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:52.857248 | orchestrator | 2025-06-05 19:55:52.857255 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-05 19:55:52.857261 | orchestrator | Thursday 05 June 2025 19:53:03 +0000 (0:00:01.560) 0:00:14.356 ********* 2025-06-05 19:55:52.857268 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-05 19:55:52.857274 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857281 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857338 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-05 19:55:52.857354 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  
2025-06-05 19:55:52.857367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857415 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:55:52.857422 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:55:52.857432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857471 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:55:52.857477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-05 19:55:52.857522 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:52.857528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-06-05 19:55:52.857547 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:52.857554 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857563 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857586 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:52.857592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-05 19:55:52.857598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-05 19:55:52.857611 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:52.857945 | orchestrator | 2025-06-05 19:55:52.857953 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-05 19:55:52.857959 | orchestrator | Thursday 05 June 2025 19:53:06 +0000 (0:00:03.473) 0:00:17.830 ********* 2025-06-05 19:55:52.857966 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-05 19:55:52.857973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858047 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858056 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858069 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858076 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.858083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858158 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-05 19:55:52.858169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858202 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 
19:55:52.858254 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858261 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.858507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.858527 | orchestrator | 2025-06-05 19:55:52.858534 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-05 19:55:52.858540 | orchestrator | Thursday 05 June 2025 19:53:13 +0000 (0:00:06.378) 0:00:24.209 ********* 2025-06-05 19:55:52.858547 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-05 19:55:52.858553 | orchestrator | 2025-06-05 19:55:52.858560 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-05 19:55:52.858566 | orchestrator | Thursday 05 June 2025 19:53:14 +0000 (0:00:00.760) 0:00:24.970 ********* 2025-06-05 19:55:52.858573 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858584 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858648 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858658 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 
'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.858665 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858672 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858678 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858689 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858696 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072319, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858724 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-06-05 19:55:52.858732 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858739 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858746 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858752 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072303, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858763 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858770 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858794 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 
1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858805 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858812 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072310, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858819 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1072313, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1511962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.858825 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072303, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858837 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072303, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858843 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-06-05 19:55:52.858868 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072306, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858878 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858885 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072310, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858892 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072303, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858899 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072303, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858910 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1072303, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858916 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072310, 
'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858940 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1072302, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.858951 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072310, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858958 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072309, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858964 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072306, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858977 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072310, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.858984 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1072310, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 
19:55:52.858991 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072306, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859014 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072309, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859025 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072306, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859032 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1072314, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1521962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859039 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072306, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859049 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1072309, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859056 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1072306, 'dev': 102, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
[common item metadata for every entry below: regular files under /operations/prometheus/, mode 0644, owner root:root (uid 0, gid 0), dev 102, nlink 1, atime/mtime 1748870577.0; only path and size differ per item]
2025-06-05 19:55:52.859063 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules, 7408 bytes)
2025-06-05 19:55:52.859086 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules, 3900 bytes)
2025-06-05 19:55:52.859097 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules, 7933 bytes)
2025-06-05 19:55:52.859104 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules, 7933 bytes)
2025-06-05 19:55:52.859110 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules, 13522 bytes)
2025-06-05 19:55:52.859121 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules, 334 bytes)
2025-06-05 19:55:52.859128 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules, 13522 bytes)
2025-06-05 19:55:52.859134 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules, 7933 bytes)
2025-06-05 19:55:52.859141 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules, 12293 bytes)
2025-06-05 19:55:52.859168 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules, 13522 bytes)
2025-06-05 19:55:52.859176 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules, 7408 bytes)
2025-06-05 19:55:52.859188 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules, 13522 bytes)
2025-06-05 19:55:52.859194 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules, 7408 bytes)
2025-06-05 19:55:52.859201 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules, 13522 bytes)
2025-06-05 19:55:52.859207 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules, 3 bytes)
2025-06-05 19:55:52.859214 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules, 5593 bytes)
2025-06-05 19:55:52.859241 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules, 7408 bytes)
2025-06-05 19:55:52.859249 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules, 7408 bytes)
2025-06-05 19:55:52.859261 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules, 334 bytes)
2025-06-05 19:55:52.859267 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules, 334 bytes)
2025-06-05 19:55:52.859275 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules, 12293 bytes)
2025-06-05 19:55:52.859282 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules, 7408 bytes)
2025-06-05 19:55:52.859290 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules, 996 bytes)
2025-06-05 19:55:52.859318 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules, 334 bytes)
2025-06-05 19:55:52.859326 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules, 3 bytes)
2025-06-05 19:55:52.859339 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules, 334 bytes)
2025-06-05 19:55:52.859346 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules, 334 bytes)
2025-06-05 19:55:52.859354 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules, 12293 bytes)
2025-06-05 19:55:52.859361 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules, 3 bytes)
2025-06-05 19:55:52.859368 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules, 55956 bytes)
2025-06-05 19:55:52.859413 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules, 12293 bytes)
2025-06-05 19:55:52.859423 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules, 996 bytes)
2025-06-05 19:55:52.859434 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules, 12293 bytes)
2025-06-05 19:55:52.859442 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules, 12293 bytes)
2025-06-05 19:55:52.859449 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules, 3 bytes)
2025-06-05 19:55:52.859456 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules, 3 bytes)
2025-06-05 19:55:52.859464 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules, 3792 bytes)
2025-06-05 19:55:52.859478 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules, 3 bytes)
2025-06-05 19:55:52.859490 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules, 3 bytes)
2025-06-05 19:55:52.859498 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules, 996 bytes)
2025-06-05 19:55:52.859505 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules, 3792 bytes)
2025-06-05 19:55:52.859513 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules, 3 bytes)
2025-06-05 19:55:52.859520 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules, 996 bytes)
2025-06-05 19:55:52.859527 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules, 3 bytes)
2025-06-05 19:55:52.859544 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules, 7933 bytes)
2025-06-05 19:55:52.859559 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules, 3539 bytes)
2025-06-05 19:55:52.859567 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules, 996 bytes)
2025-06-05 19:55:52.859574 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules, 3792 bytes)
2025-06-05 19:55:52.859581 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules, 3 bytes)
2025-06-05 19:55:52.859589 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules, 996 bytes)
2025-06-05 19:55:52.859596 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules, 3539 bytes)
2025-06-05 19:55:52.859611 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules, 5987 bytes)
2025-06-05 19:55:52.859623 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules, 3 bytes)
2025-06-05 19:55:52.859630 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules, 13522 bytes)
2025-06-05 19:55:52.859637 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules, 5987 bytes)
2025-06-05 19:55:52.859644 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules, 3792 bytes)
2025-06-05 19:55:52.859650 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules, 3539 bytes)
2025-06-05 19:55:52.859657 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules, 3 bytes)
2025-06-05 19:55:52.859675 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules, 12980 bytes)
2025-06-05 19:55:52.859682 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.859688 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules, 3792 bytes)
2025-06-05 19:55:52.859695 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules, 3539 bytes)
2025-06-05 19:55:52.859702 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules, 12980 bytes)
2025-06-05 19:55:52.859708 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.859714 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules, 3792 bytes)
2025-06-05 19:55:52.859721 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules, 3539 bytes)
2025-06-05 19:55:52.859727 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr':
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072307, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859745 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072307, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859752 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072320, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1541963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859758 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:55:52.859764 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072320, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1541963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859771 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:55:52.859777 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072307, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859784 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1072318, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1531963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.859790 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072326, 'dev': 
102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1551962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859801 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072320, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1541963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859807 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:55:52.859821 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072307, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859828 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072320, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 
1749150711.1541963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-05 19:55:52.859834 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:55:52.859841 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1072327, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1561964, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.859847 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1072316, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1521962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.859854 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072305, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1481962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.859860 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1072308, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.859871 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1072301, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.147196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 19:55:52.859884 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1072312, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1501963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-05 
19:55:52.859891 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1072326, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1551962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-05 19:55:52.859898 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1072307, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1491961, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-05 19:55:52.859905 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1072320, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1541963, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-05 19:55:52.859911 | orchestrator |
2025-06-05 19:55:52.859917 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-05 19:55:52.859924 | orchestrator | Thursday 05 June 2025 19:53:37 +0000 (0:00:23.209) 0:00:48.180 *********
2025-06-05 19:55:52.859930 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-05 19:55:52.859936 | orchestrator |
2025-06-05 19:55:52.859943 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-05 19:55:52.859949 | orchestrator | Thursday 05 June 2025 19:53:38 +0000 (0:00:00.726) 0:00:48.906 *********
2025-06-05 19:55:52.859956 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.859962 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.859972 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.859979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.859985 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.859991 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-05 19:55:52.859997 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860010 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.860016 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860023 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.860029 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:55:52.860035 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860042 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860048 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.860054 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860060 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.860067 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860079 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.860085 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860092 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.860098 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860117 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860124 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.860133 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860140 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.860146 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860159 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.860165 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860171 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.860177 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860183 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860189 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-06-05 19:55:52.860196 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-05 19:55:52.860202 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-06-05 19:55:52.860208 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-05 19:55:52.860214 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-05 19:55:52.860220 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-05 19:55:52.860227 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-05 19:55:52.860233 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-05 19:55:52.860239 | orchestrator |
2025-06-05 19:55:52.860245 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-05 19:55:52.860251 | orchestrator | Thursday 05 June 2025 19:53:41 +0000 (0:00:03.262) 0:00:52.168 *********
2025-06-05 19:55:52.860258 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860264 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860275 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.860281 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.860287 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860293 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860300 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860306 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.860312 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860318 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860324 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860330 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860337 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-05 19:55:52.860343 | orchestrator |
2025-06-05 19:55:52.860349 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-05 19:55:52.860355 | orchestrator | Thursday 05 June 2025 19:53:54 +0000 (0:00:13.690) 0:01:05.859 *********
2025-06-05 19:55:52.860362 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860368 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.860374 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860390 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.860397 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860403 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.860409 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860415 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860421 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860427 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860434 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860440 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860446 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-05 19:55:52.860452 | orchestrator |
2025-06-05 19:55:52.860458 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-05 19:55:52.860464 | orchestrator | Thursday 05 June 2025 19:53:57 +0000 (0:00:02.983) 0:01:08.843 *********
2025-06-05 19:55:52.860471 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860477 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.860483 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860489 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.860496 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860505 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860512 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.860523 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860530 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860540 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860547 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860553 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-05 19:55:52.860559 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860565 | orchestrator |
2025-06-05 19:55:52.860571 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-06-05 19:55:52.860578 | orchestrator | Thursday 05 June 2025 19:53:59 +0000 (0:00:01.787) 0:01:10.631 *********
2025-06-05 19:55:52.860584 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-05 19:55:52.860590 | orchestrator |
2025-06-05 19:55:52.860596 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-06-05 19:55:52.860602 | orchestrator | Thursday 05 June 2025 19:54:00 +0000 (0:00:00.708) 0:01:11.339 *********
2025-06-05 19:55:52.860608 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:55:52.860615 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.860621 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.860627 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.860633 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860639 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860645 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860651 | orchestrator |
2025-06-05 19:55:52.860658 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-06-05 19:55:52.860664 | orchestrator | Thursday 05 June 2025 19:54:01 +0000 (0:00:00.833) 0:01:12.173 *********
2025-06-05 19:55:52.860670 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:55:52.860676 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860682 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860688 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860694 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:55:52.860700 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:55:52.860707 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:55:52.860713 | orchestrator |
2025-06-05 19:55:52.860719 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-06-05 19:55:52.860725 | orchestrator | Thursday 05 June 2025 19:54:03 +0000 (0:00:02.047) 0:01:14.220 *********
2025-06-05 19:55:52.860731 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860738 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:55:52.860744 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860750 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860756 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.860762 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860768 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.860775 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.860781 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860787 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860793 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860799 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860805 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-05 19:55:52.860811 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860818 | orchestrator |
2025-06-05 19:55:52.860824 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-06-05 19:55:52.860830 | orchestrator | Thursday 05 June 2025 19:54:06 +0000 (0:00:03.101) 0:01:17.322 *********
2025-06-05 19:55:52.860840 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
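The repeated `[WARNING]: Skipped '…' path due to this access issue: '…' is not a directory` messages in this log come from Ansible's `find` module: when one of its configured search paths is missing or is not a directory, it emits a warning and moves on rather than failing, which is why the affected tasks still finish with `ok`. A minimal Python sketch of that tolerant lookup behavior (the overlay path below is hypothetical, and the real module additionally supports patterns, recursion, age/size filters, and more):

```python
import os

def find_files(paths):
    """Collect regular files from the given search paths; for a path that
    is missing or not a directory, record a warning (as Ansible's find
    module does) instead of raising an error."""
    found, warnings = [], []
    for path in paths:
        if not os.path.isdir(path):
            warnings.append(
                f"Skipped '{path}' path due to this access issue: "
                f"'{path}' is not a directory"
            )
            continue
        for name in os.listdir(path):
            full = os.path.join(path, name)
            if os.path.isfile(full):
                found.append(full)
    return found, warnings

# Hypothetical overlay directory that does not exist on this machine,
# mirroring the prometheus.yml.d paths warned about in the log.
found, warnings = find_files(
    ["/nonexistent/overlays/prometheus/testbed-manager/prometheus.yml.d"]
)
```

Because the missing path only produces a warning, callers can treat an empty result list as "no overrides configured", which matches how the role proceeds here.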
2025-06-05 19:55:52.860846 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.860852 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-05 19:55:52.860859 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.860865 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-05 19:55:52.860871 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.860877 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-05 19:55:52.860883 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.860889 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-05 19:55:52.860896 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-05 19:55:52.860902 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.860908 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-05 19:55:52.860914 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.860920 | orchestrator |
2025-06-05 19:55:52.860929 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-06-05 19:55:52.860936 | orchestrator | Thursday 05 June 2025 19:54:08 +0000 (0:00:02.435) 0:01:19.757 *********
2025-06-05 19:55:52.860945 | orchestrator | [WARNING]: Skipped
2025-06-05 19:55:52.860952 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-06-05 19:55:52.860958 | orchestrator | due to this access issue:
2025-06-05 19:55:52.860964 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-06-05 19:55:52.860970 | orchestrator | not a directory
2025-06-05 19:55:52.860976 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-05 19:55:52.860983 | orchestrator |
2025-06-05 19:55:52.860989 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-06-05 19:55:52.860995 | orchestrator | Thursday 05 June 2025 19:54:09 +0000 (0:00:00.992) 0:01:20.749 *********
2025-06-05 19:55:52.861001 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:55:52.861007 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.861014 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.861020 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.861026 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.861032 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.861038 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.861044 | orchestrator |
2025-06-05 19:55:52.861050 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-06-05 19:55:52.861057 | orchestrator | Thursday 05 June 2025 19:54:10 +0000 (0:00:00.736) 0:01:21.486 *********
2025-06-05 19:55:52.861063 | orchestrator | skipping: [testbed-manager]
2025-06-05 19:55:52.861069 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:55:52.861075 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:55:52.861081 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:55:52.861087 | orchestrator | skipping: [testbed-node-3]
2025-06-05 19:55:52.861093 | orchestrator | skipping: [testbed-node-4]
2025-06-05 19:55:52.861099 | orchestrator | skipping: [testbed-node-5]
2025-06-05 19:55:52.861105 | orchestrator |
2025-06-05 19:55:52.861112 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-06-05 19:55:52.861118 | orchestrator | Thursday 05 June 2025 19:54:11 +0000 (0:00:00.682) 0:01:22.169 *********
2025-06-05 19:55:52.861125 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-05 19:55:52.861136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-05 19:55:52.861143 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'],
'dimensions': {}}}) 2025-06-05 19:55:52.861150 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.861163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.861170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.861177 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.861183 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-05 19:55:52.861194 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861240 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-05 19:55:52.861252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2025-06-05 19:55:52.861258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861284 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861290 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861297 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-05 19:55:52.861340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-05 19:55:52.861364 | orchestrator | 2025-06-05 19:55:52.861370 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-05 19:55:52.861377 | orchestrator | Thursday 05 June 2025 19:54:15 +0000 (0:00:04.461) 0:01:26.630 ********* 2025-06-05 19:55:52.861420 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-05 19:55:52.861427 | orchestrator | skipping: [testbed-manager] 2025-06-05 19:55:52.861433 | orchestrator | 2025-06-05 19:55:52.861439 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2025-06-05 19:55:52.861445 | orchestrator | Thursday 05 June 2025 19:54:16 +0000 (0:00:01.028) 0:01:27.659 ********* 2025-06-05 19:55:52.861451 | orchestrator | 2025-06-05 19:55:52.861458 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-05 19:55:52.861464 | orchestrator | Thursday 05 June 2025 19:54:16 +0000 (0:00:00.170) 0:01:27.830 ********* 2025-06-05 19:55:52.861470 | orchestrator | 2025-06-05 19:55:52.861476 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-05 19:55:52.861482 | orchestrator | Thursday 05 June 2025 19:54:17 +0000 (0:00:00.062) 0:01:27.892 ********* 2025-06-05 19:55:52.861488 | orchestrator | 2025-06-05 19:55:52.861494 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-05 19:55:52.861501 | orchestrator | Thursday 05 June 2025 19:54:17 +0000 (0:00:00.059) 0:01:27.952 ********* 2025-06-05 19:55:52.861507 | orchestrator | 2025-06-05 19:55:52.861513 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-05 19:55:52.861519 | orchestrator | Thursday 05 June 2025 19:54:17 +0000 (0:00:00.060) 0:01:28.012 ********* 2025-06-05 19:55:52.861525 | orchestrator | 2025-06-05 19:55:52.861531 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-05 19:55:52.861537 | orchestrator | Thursday 05 June 2025 19:54:17 +0000 (0:00:00.057) 0:01:28.070 ********* 2025-06-05 19:55:52.861543 | orchestrator | 2025-06-05 19:55:52.861549 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-05 19:55:52.861555 | orchestrator | Thursday 05 June 2025 19:54:17 +0000 (0:00:00.060) 0:01:28.130 ********* 2025-06-05 19:55:52.861561 | orchestrator | 2025-06-05 19:55:52.861567 | orchestrator | RUNNING HANDLER 
[prometheus : Restart prometheus-server container] ************* 2025-06-05 19:55:52.861573 | orchestrator | Thursday 05 June 2025 19:54:17 +0000 (0:00:00.081) 0:01:28.212 ********* 2025-06-05 19:55:52.861580 | orchestrator | changed: [testbed-manager] 2025-06-05 19:55:52.861586 | orchestrator | 2025-06-05 19:55:52.861592 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-05 19:55:52.861598 | orchestrator | Thursday 05 June 2025 19:54:35 +0000 (0:00:17.984) 0:01:46.196 ********* 2025-06-05 19:55:52.861604 | orchestrator | changed: [testbed-manager] 2025-06-05 19:55:52.861610 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:55:52.861616 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:55:52.861622 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:55:52.861628 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:55:52.861634 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:55:52.861640 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:52.861646 | orchestrator | 2025-06-05 19:55:52.861653 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-05 19:55:52.861659 | orchestrator | Thursday 05 June 2025 19:54:50 +0000 (0:00:14.873) 0:02:01.070 ********* 2025-06-05 19:55:52.861665 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:52.861671 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:55:52.861677 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:55:52.861687 | orchestrator | 2025-06-05 19:55:52.861694 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-05 19:55:52.861700 | orchestrator | Thursday 05 June 2025 19:54:55 +0000 (0:00:05.685) 0:02:06.755 ********* 2025-06-05 19:55:52.861706 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:55:52.861712 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:52.861718 | orchestrator | 
changed: [testbed-node-2] 2025-06-05 19:55:52.861724 | orchestrator | 2025-06-05 19:55:52.861745 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-05 19:55:52.861759 | orchestrator | Thursday 05 June 2025 19:55:02 +0000 (0:00:06.159) 0:02:12.915 ********* 2025-06-05 19:55:52.861765 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:55:52.861771 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:55:52.861781 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:55:52.861787 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:55:52.861793 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:55:52.861800 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:52.861809 | orchestrator | changed: [testbed-manager] 2025-06-05 19:55:52.861816 | orchestrator | 2025-06-05 19:55:52.861822 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-05 19:55:52.861828 | orchestrator | Thursday 05 June 2025 19:55:15 +0000 (0:00:13.170) 0:02:26.085 ********* 2025-06-05 19:55:52.861834 | orchestrator | changed: [testbed-manager] 2025-06-05 19:55:52.861841 | orchestrator | 2025-06-05 19:55:52.861847 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-05 19:55:52.861853 | orchestrator | Thursday 05 June 2025 19:55:21 +0000 (0:00:06.692) 0:02:32.777 ********* 2025-06-05 19:55:52.861859 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:55:52.861865 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:55:52.861872 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:55:52.861878 | orchestrator | 2025-06-05 19:55:52.861884 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-05 19:55:52.861890 | orchestrator | Thursday 05 June 2025 19:55:32 +0000 (0:00:10.881) 0:02:43.659 ********* 2025-06-05 19:55:52.861896 | orchestrator | 
changed: [testbed-manager] 2025-06-05 19:55:52.861902 | orchestrator | 2025-06-05 19:55:52.861908 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-05 19:55:52.861915 | orchestrator | Thursday 05 June 2025 19:55:37 +0000 (0:00:04.601) 0:02:48.260 ********* 2025-06-05 19:55:52.861921 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:55:52.861927 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:55:52.861933 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:55:52.861939 | orchestrator | 2025-06-05 19:55:52.861945 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:55:52.861952 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-05 19:55:52.861958 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-05 19:55:52.861965 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-05 19:55:52.861971 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-05 19:55:52.861977 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-05 19:55:52.861984 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-05 19:55:52.861990 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-05 19:55:52.862003 | orchestrator | 2025-06-05 19:55:52.862009 | orchestrator | 2025-06-05 19:55:52.862038 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:55:52.862044 | orchestrator | Thursday 05 June 2025 19:55:49 +0000 (0:00:12.099) 0:03:00.359 ********* 2025-06-05 19:55:52.862050 | 
orchestrator | =============================================================================== 2025-06-05 19:55:52.862057 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.21s 2025-06-05 19:55:52.862063 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.98s 2025-06-05 19:55:52.862069 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.87s 2025-06-05 19:55:52.862075 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 13.69s 2025-06-05 19:55:52.862081 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.17s 2025-06-05 19:55:52.862087 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.10s 2025-06-05 19:55:52.862093 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.88s 2025-06-05 19:55:52.862099 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.69s 2025-06-05 19:55:52.862106 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.38s 2025-06-05 19:55:52.862112 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 6.16s 2025-06-05 19:55:52.862118 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.69s 2025-06-05 19:55:52.862124 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.44s 2025-06-05 19:55:52.862130 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.60s 2025-06-05 19:55:52.862136 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.46s 2025-06-05 19:55:52.862142 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.47s 2025-06-05 19:55:52.862148 | orchestrator | 
prometheus : Find prometheus host config overrides ---------------------- 3.26s 2025-06-05 19:55:52.862154 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.10s 2025-06-05 19:55:52.862160 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.08s 2025-06-05 19:55:52.862170 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.98s 2025-06-05 19:55:52.862177 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.44s 2025-06-05 19:55:52.862187 | orchestrator | 2025-06-05 19:55:52 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:55:52.862193 | orchestrator | 2025-06-05 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:55.906073 | orchestrator | 2025-06-05 19:55:55 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:55.906173 | orchestrator | 2025-06-05 19:55:55 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:55.906292 | orchestrator | 2025-06-05 19:55:55 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:55.908519 | orchestrator | 2025-06-05 19:55:55 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:55:55.908621 | orchestrator | 2025-06-05 19:55:55 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:55:58.951021 | orchestrator | 2025-06-05 19:55:58 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:55:58.952961 | orchestrator | 2025-06-05 19:55:58 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:55:58.954902 | orchestrator | 2025-06-05 19:55:58 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED 2025-06-05 19:55:58.956754 | orchestrator | 2025-06-05 19:55:58 | INFO  | Task 
40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:55:58.956783 | orchestrator | 2025-06-05 19:55:58 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:02.002751 | orchestrator | 2025-06-05 19:56:02 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:02.006250 | orchestrator | 2025-06-05 19:56:02 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:02.008130 | orchestrator | 2025-06-05 19:56:02 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:02.010917 | orchestrator | 2025-06-05 19:56:02 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:02.011028 | orchestrator | 2025-06-05 19:56:02 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:05.050267 | orchestrator | 2025-06-05 19:56:05 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:05.050566 | orchestrator | 2025-06-05 19:56:05 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:05.051118 | orchestrator | 2025-06-05 19:56:05 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:05.051886 | orchestrator | 2025-06-05 19:56:05 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:05.051903 | orchestrator | 2025-06-05 19:56:05 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:08.099038 | orchestrator | 2025-06-05 19:56:08 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:08.099327 | orchestrator | 2025-06-05 19:56:08 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:08.099381 | orchestrator | 2025-06-05 19:56:08 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:08.100366 | orchestrator | 2025-06-05 19:56:08 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:08.100441 | orchestrator | 2025-06-05 19:56:08 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:11.145051 | orchestrator | 2025-06-05 19:56:11 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:11.146465 | orchestrator | 2025-06-05 19:56:11 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:11.149443 | orchestrator | 2025-06-05 19:56:11 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:11.151662 | orchestrator | 2025-06-05 19:56:11 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:11.151688 | orchestrator | 2025-06-05 19:56:11 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:14.208918 | orchestrator | 2025-06-05 19:56:14 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:14.209314 | orchestrator | 2025-06-05 19:56:14 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:14.210459 | orchestrator | 2025-06-05 19:56:14 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:14.212059 | orchestrator | 2025-06-05 19:56:14 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:14.212120 | orchestrator | 2025-06-05 19:56:14 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:17.243485 | orchestrator | 2025-06-05 19:56:17 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:17.244945 | orchestrator | 2025-06-05 19:56:17 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:17.246256 | orchestrator | 2025-06-05 19:56:17 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:17.247755 | orchestrator | 2025-06-05 19:56:17 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:17.247796 | orchestrator | 2025-06-05 19:56:17 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:20.278501 | orchestrator | 2025-06-05 19:56:20 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:20.278878 | orchestrator | 2025-06-05 19:56:20 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:20.279631 | orchestrator | 2025-06-05 19:56:20 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:20.280456 | orchestrator | 2025-06-05 19:56:20 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:20.280483 | orchestrator | 2025-06-05 19:56:20 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:23.320468 | orchestrator | 2025-06-05 19:56:23 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:23.320715 | orchestrator | 2025-06-05 19:56:23 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:23.321460 | orchestrator | 2025-06-05 19:56:23 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:23.322400 | orchestrator | 2025-06-05 19:56:23 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:23.322570 | orchestrator | 2025-06-05 19:56:23 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:26.361562 | orchestrator | 2025-06-05 19:56:26 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:26.363476 | orchestrator | 2025-06-05 19:56:26 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:26.365251 | orchestrator | 2025-06-05 19:56:26 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:26.367162 | orchestrator | 2025-06-05 19:56:26 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:26.367187 | orchestrator | 2025-06-05 19:56:26 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:29.405492 | orchestrator | 2025-06-05 19:56:29 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:29.405734 | orchestrator | 2025-06-05 19:56:29 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:29.406281 | orchestrator | 2025-06-05 19:56:29 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:29.406992 | orchestrator | 2025-06-05 19:56:29 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:29.407021 | orchestrator | 2025-06-05 19:56:29 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:32.444732 | orchestrator | 2025-06-05 19:56:32 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:32.446151 | orchestrator | 2025-06-05 19:56:32 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:32.448010 | orchestrator | 2025-06-05 19:56:32 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:32.449495 | orchestrator | 2025-06-05 19:56:32 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:32.449527 | orchestrator | 2025-06-05 19:56:32 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:35.493156 | orchestrator | 2025-06-05 19:56:35 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:35.494856 | orchestrator | 2025-06-05 19:56:35 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:35.497622 | orchestrator | 2025-06-05 19:56:35 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:35.501591 | orchestrator | 2025-06-05 19:56:35 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:35.501678 | orchestrator | 2025-06-05 19:56:35 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:38.541143 | orchestrator | 2025-06-05 19:56:38 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:38.542752 | orchestrator | 2025-06-05 19:56:38 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:38.544681 | orchestrator | 2025-06-05 19:56:38 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state STARTED
2025-06-05 19:56:38.546910 | orchestrator | 2025-06-05 19:56:38 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:56:38.547026 | orchestrator | 2025-06-05 19:56:38 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:56:41.594417 | orchestrator | 2025-06-05 19:56:41 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state STARTED
2025-06-05 19:56:41.594521 | orchestrator | 2025-06-05 19:56:41 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED
2025-06-05 19:56:41.599431 | orchestrator | 2025-06-05 19:56:41 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:56:41.600130 | orchestrator | 2025-06-05 19:56:41 | INFO  | Task 64464e4c-0b20-45ff-8865-960844e7be1b is in state SUCCESS
2025-06-05 19:56:41.600587 | orchestrator |
2025-06-05 19:56:41.602528 | orchestrator |
2025-06-05 19:56:41.602712 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:56:41.602731 | orchestrator |
2025-06-05 19:56:41.602744 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:56:41.602756 | orchestrator | Thursday 05 June 2025 19:54:01 +0000 (0:00:00.508) 0:00:00.508 *********
2025-06-05 19:56:41.602767 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:56:41.602781 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:56:41.602793 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:56:41.602804 | orchestrator |
2025-06-05 19:56:41.602815 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:56:41.602826 | orchestrator | Thursday 05 June 2025 19:54:02 +0000 (0:00:00.411) 0:00:00.919 *********
2025-06-05 19:56:41.602838 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-06-05 19:56:41.602849 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-06-05 19:56:41.602860 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-06-05 19:56:41.602871 | orchestrator |
2025-06-05 19:56:41.602882 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-06-05 19:56:41.602893 | orchestrator |
2025-06-05 19:56:41.602904 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-05 19:56:41.602915 | orchestrator | Thursday 05 June 2025 19:54:02 +0000 (0:00:00.408) 0:00:01.327 *********
2025-06-05 19:56:41.602926 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:56:41.602938 | orchestrator |
2025-06-05 19:56:41.602950 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-06-05 19:56:41.602961 | orchestrator | Thursday 05 June 2025 19:54:03 +0000 (0:00:00.541) 0:00:01.869 *********
2025-06-05 19:56:41.602972 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-06-05 19:56:41.602983 | orchestrator |
2025-06-05 19:56:41.602994 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-06-05 19:56:41.603032 | orchestrator | Thursday 05 June 2025 19:54:07 +0000 (0:00:03.866) 0:00:05.736 *********
2025-06-05 19:56:41.603044 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-06-05 19:56:41.603056 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-06-05 19:56:41.603067 | orchestrator |
2025-06-05 19:56:41.603078 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-06-05 19:56:41.603089 | orchestrator | Thursday 05 June 2025 19:54:14 +0000 (0:00:07.362) 0:00:13.099 *********
2025-06-05 19:56:41.603100 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-05 19:56:41.603112 | orchestrator |
2025-06-05 19:56:41.603123 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-06-05 19:56:41.603134 | orchestrator | Thursday 05 June 2025 19:54:18 +0000 (0:00:03.780) 0:00:16.879 *********
2025-06-05 19:56:41.603145 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-05 19:56:41.603156 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-06-05 19:56:41.603167 | orchestrator |
2025-06-05 19:56:41.603179 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-06-05 19:56:41.603191 | orchestrator | Thursday 05 June 2025 19:54:22 +0000 (0:00:03.755) 0:00:20.635 *********
2025-06-05 19:56:41.603201 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-05 19:56:41.603213 | orchestrator |
2025-06-05 19:56:41.603224 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-06-05 19:56:41.603235 | orchestrator | Thursday 05 June 2025 19:54:25 +0000 (0:00:03.156) 0:00:23.791 *********
2025-06-05 19:56:41.603246 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-06-05 19:56:41.603257 | orchestrator |
2025-06-05 19:56:41.603270 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-06-05 19:56:41.603283 |
orchestrator | Thursday 05 June 2025 19:54:28 +0000 (0:00:03.470) 0:00:27.262 ********* 2025-06-05 19:56:41.603334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.603354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.603381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.603396 | orchestrator | 2025-06-05 19:56:41.603409 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-05 19:56:41.603422 | orchestrator | Thursday 05 June 2025 19:54:32 +0000 (0:00:03.841) 0:00:31.104 ********* 2025-06-05 19:56:41.603505 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:56:41.603520 | orchestrator | 2025-06-05 19:56:41.603533 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir 
exists] **************
2025-06-05 19:56:41.603546 | orchestrator | Thursday 05 June 2025 19:54:33 +0000 (0:00:00.662) 0:00:31.767 *********
2025-06-05 19:56:41.603558 | orchestrator | changed: [testbed-node-0]
2025-06-05 19:56:41.603572 | orchestrator | changed: [testbed-node-1]
2025-06-05 19:56:41.603595 | orchestrator | changed: [testbed-node-2]
2025-06-05 19:56:41.603606 | orchestrator |
2025-06-05 19:56:41.603617 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-06-05 19:56:41.603628 | orchestrator | Thursday 05 June 2025 19:54:38 +0000 (0:00:05.705) 0:00:37.472 *********
2025-06-05 19:56:41.603639 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-05 19:56:41.603650 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-05 19:56:41.603661 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-05 19:56:41.603672 | orchestrator |
2025-06-05 19:56:41.603683 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-06-05 19:56:41.603694 | orchestrator | Thursday 05 June 2025 19:54:40 +0000 (0:00:02.023) 0:00:39.496 *********
2025-06-05 19:56:41.603705 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-05 19:56:41.603716 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-05 19:56:41.603728 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-05 19:56:41.603739 | orchestrator |
2025-06-05 19:56:41.603751 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-06-05 19:56:41.603762 | orchestrator | Thursday 05 June 2025 19:54:42 +0000 (0:00:01.222) 0:00:40.719 *********
2025-06-05 19:56:41.603773 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:56:41.603784 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:56:41.603795 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:56:41.603806 | orchestrator |
2025-06-05 19:56:41.603817 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-06-05 19:56:41.603827 | orchestrator | Thursday 05 June 2025 19:54:42 +0000 (0:00:00.095) 0:00:41.458 *********
2025-06-05 19:56:41.603838 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:56:41.603849 | orchestrator |
2025-06-05 19:56:41.603860 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-06-05 19:56:41.603871 | orchestrator | Thursday 05 June 2025 19:54:43 +0000 (0:00:00.095) 0:00:41.554 *********
2025-06-05 19:56:41.603882 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:56:41.603893 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:56:41.603904 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:56:41.603915 | orchestrator |
2025-06-05 19:56:41.603926 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-05 19:56:41.603937 | orchestrator | Thursday 05 June 2025 19:54:43 +0000 (0:00:00.193) 0:00:41.748 *********
2025-06-05 19:56:41.603948 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:56:41.603959 | orchestrator |
2025-06-05 19:56:41.603970 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-06-05 19:56:41.603981 | orchestrator | Thursday 05 June 2025 19:54:43 +0000 (0:00:00.451) 0:00:42.200 *********
2025-06-05 19:56:41.604017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name':
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.604052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.604081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.604112 | orchestrator | 2025-06-05 19:56:41.604133 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-05 19:56:41.604151 | orchestrator | Thursday 05 June 2025 19:54:47 +0000 (0:00:03.491) 0:00:45.691 ********* 2025-06-05 19:56:41.604184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:56:41.604204 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:56:41.604267 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:56:41.604306 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604318 | orchestrator | 2025-06-05 19:56:41.604329 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-05 19:56:41.604340 | orchestrator | Thursday 05 June 2025 19:54:49 +0000 (0:00:02.597) 0:00:48.289 ********* 2025-06-05 19:56:41.604351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:56:41.604364 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:56:41.604406 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-05 19:56:41.604430 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604464 | orchestrator | 2025-06-05 19:56:41.604476 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-05 19:56:41.604487 | orchestrator | Thursday 05 June 2025 19:54:53 +0000 (0:00:03.962) 0:00:52.251 ********* 2025-06-05 19:56:41.604498 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604510 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604521 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604533 | orchestrator | 2025-06-05 19:56:41.604544 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-05 19:56:41.604562 | orchestrator | Thursday 05 June 2025 19:54:56 +0000 (0:00:03.084) 0:00:55.336 ********* 2025-06-05 19:56:41.604592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.604606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.604624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.604643 | orchestrator | 2025-06-05 19:56:41.604655 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-05 19:56:41.604666 | orchestrator | Thursday 05 June 2025 19:55:00 +0000 (0:00:04.112) 0:00:59.448 ********* 2025-06-05 19:56:41.604678 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.604689 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:56:41.604700 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:56:41.604711 | orchestrator | 2025-06-05 19:56:41.604722 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-05 19:56:41.604739 | orchestrator | Thursday 05 June 2025 19:55:09 +0000 (0:00:08.179) 0:01:07.628 ********* 2025-06-05 19:56:41.604751 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604762 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604773 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604784 | orchestrator | 2025-06-05 19:56:41.604795 | orchestrator | TASK [glance : Copying over glance-swift.conf for 
glance_api] ****************** 2025-06-05 19:56:41.604806 | orchestrator | Thursday 05 June 2025 19:55:13 +0000 (0:00:04.502) 0:01:12.130 ********* 2025-06-05 19:56:41.604817 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604828 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604839 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604849 | orchestrator | 2025-06-05 19:56:41.604860 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-05 19:56:41.604872 | orchestrator | Thursday 05 June 2025 19:55:17 +0000 (0:00:03.520) 0:01:15.651 ********* 2025-06-05 19:56:41.604882 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604893 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604904 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604915 | orchestrator | 2025-06-05 19:56:41.604926 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-05 19:56:41.604937 | orchestrator | Thursday 05 June 2025 19:55:20 +0000 (0:00:03.380) 0:01:19.031 ********* 2025-06-05 19:56:41.604948 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.604959 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.604970 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.604981 | orchestrator | 2025-06-05 19:56:41.604992 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-05 19:56:41.605002 | orchestrator | Thursday 05 June 2025 19:55:25 +0000 (0:00:04.839) 0:01:23.871 ********* 2025-06-05 19:56:41.605014 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.605024 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.605035 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.605046 | orchestrator | 2025-06-05 19:56:41.605064 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] 
**************************** 2025-06-05 19:56:41.605075 | orchestrator | Thursday 05 June 2025 19:55:25 +0000 (0:00:00.285) 0:01:24.157 ********* 2025-06-05 19:56:41.605086 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-05 19:56:41.605097 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.605108 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-05 19:56:41.605119 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.605130 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-05 19:56:41.605141 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.605151 | orchestrator | 2025-06-05 19:56:41.605162 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-05 19:56:41.605173 | orchestrator | Thursday 05 June 2025 19:55:28 +0000 (0:00:03.062) 0:01:27.220 ********* 2025-06-05 19:56:41.605189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.605212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.605235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-05 19:56:41.605248 | orchestrator | 2025-06-05 19:56:41.605259 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-05 19:56:41.605270 | orchestrator | Thursday 05 June 2025 19:55:32 +0000 (0:00:03.661) 0:01:30.881 ********* 2025-06-05 19:56:41.605281 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:56:41.605292 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:56:41.605303 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:56:41.605314 | orchestrator | 2025-06-05 19:56:41.605325 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-05 19:56:41.605336 | orchestrator | Thursday 05 June 2025 19:55:32 +0000 (0:00:00.383) 0:01:31.265 ********* 2025-06-05 19:56:41.605347 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.605358 | orchestrator | 2025-06-05 19:56:41.605369 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-05 19:56:41.605380 | orchestrator | Thursday 05 June 2025 19:55:34 +0000 (0:00:02.050) 0:01:33.315 ********* 2025-06-05 19:56:41.605391 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.605402 | orchestrator | 2025-06-05 19:56:41.605413 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-05 19:56:41.605424 | orchestrator | Thursday 05 June 2025 19:55:37 +0000 (0:00:02.272) 0:01:35.587 ********* 2025-06-05 19:56:41.605468 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.605480 | orchestrator | 2025-06-05 19:56:41.605492 | 
orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-05 19:56:41.605509 | orchestrator | Thursday 05 June 2025 19:55:39 +0000 (0:00:02.093) 0:01:37.681 ********* 2025-06-05 19:56:41.605521 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.605532 | orchestrator | 2025-06-05 19:56:41.605543 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-05 19:56:41.605561 | orchestrator | Thursday 05 June 2025 19:56:09 +0000 (0:00:30.362) 0:02:08.044 ********* 2025-06-05 19:56:41.605572 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.605584 | orchestrator | 2025-06-05 19:56:41.605595 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-05 19:56:41.605606 | orchestrator | Thursday 05 June 2025 19:56:12 +0000 (0:00:02.584) 0:02:10.628 ********* 2025-06-05 19:56:41.605617 | orchestrator | 2025-06-05 19:56:41.605629 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-05 19:56:41.605640 | orchestrator | Thursday 05 June 2025 19:56:12 +0000 (0:00:00.063) 0:02:10.692 ********* 2025-06-05 19:56:41.605651 | orchestrator | 2025-06-05 19:56:41.605662 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-05 19:56:41.605673 | orchestrator | Thursday 05 June 2025 19:56:12 +0000 (0:00:00.061) 0:02:10.753 ********* 2025-06-05 19:56:41.605684 | orchestrator | 2025-06-05 19:56:41.605695 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-05 19:56:41.605706 | orchestrator | Thursday 05 June 2025 19:56:12 +0000 (0:00:00.063) 0:02:10.816 ********* 2025-06-05 19:56:41.605717 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:56:41.605728 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:56:41.605739 | orchestrator | changed: [testbed-node-2] 
2025-06-05 19:56:41.605750 | orchestrator | 2025-06-05 19:56:41.605761 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:56:41.605773 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-05 19:56:41.605786 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-05 19:56:41.605797 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-05 19:56:41.605808 | orchestrator | 2025-06-05 19:56:41.605819 | orchestrator | 2025-06-05 19:56:41.605831 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:56:41.605842 | orchestrator | Thursday 05 June 2025 19:56:39 +0000 (0:00:27.444) 0:02:38.261 ********* 2025-06-05 19:56:41.605853 | orchestrator | =============================================================================== 2025-06-05 19:56:41.605864 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.36s 2025-06-05 19:56:41.605875 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.44s 2025-06-05 19:56:41.605886 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 8.18s 2025-06-05 19:56:41.605898 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.36s 2025-06-05 19:56:41.605909 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.71s 2025-06-05 19:56:41.605920 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.84s 2025-06-05 19:56:41.605932 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.50s 2025-06-05 19:56:41.605943 | orchestrator | glance : Copying over config.json files for services 
-------------------- 4.11s 2025-06-05 19:56:41.605954 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.96s 2025-06-05 19:56:41.605965 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.87s 2025-06-05 19:56:41.605977 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.84s 2025-06-05 19:56:41.605988 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.78s 2025-06-05 19:56:41.605999 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.76s 2025-06-05 19:56:41.606010 | orchestrator | glance : Check glance containers ---------------------------------------- 3.66s 2025-06-05 19:56:41.606072 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.52s 2025-06-05 19:56:41.606091 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.49s 2025-06-05 19:56:41.606102 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.47s 2025-06-05 19:56:41.606113 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.38s 2025-06-05 19:56:41.606124 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.16s 2025-06-05 19:56:41.606135 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.08s 2025-06-05 19:56:41.606146 | orchestrator | 2025-06-05 19:56:41 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:56:41.606158 | orchestrator | 2025-06-05 19:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:56:44.639360 | orchestrator | 2025-06-05 19:56:44 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state STARTED 2025-06-05 19:56:44.639681 | orchestrator | 2025-06-05 19:56:44 | INFO  | Task 
b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:56:44.640968 | orchestrator | 2025-06-05 19:56:44 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:56:44.642163 | orchestrator | 2025-06-05 19:56:44 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:56:44.642219 | orchestrator | 2025-06-05 19:56:44 | INFO  | Wait 1 second(s) until the next check [identical STARTED polling cycles for the same four tasks, repeated every ~3 seconds from 19:56:47 through 19:57:27, trimmed] 2025-06-05 19:57:30.401938 | orchestrator | 2025-06-05 19:57:30 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state STARTED 2025-06-05 19:57:30.403454 | orchestrator | 2025-06-05 19:57:30 | INFO  | Task
b990265e-428e-435f-bc47-6a22d00f383e is in state STARTED 2025-06-05 19:57:30.405971 | orchestrator | 2025-06-05 19:57:30 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:57:30.407349 | orchestrator | 2025-06-05 19:57:30 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:57:30.407380 | orchestrator | 2025-06-05 19:57:30 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:57:33.448432 | orchestrator | 2025-06-05 19:57:33 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state STARTED 2025-06-05 19:57:33.454149 | orchestrator | 2025-06-05 19:57:33 | INFO  | Task b990265e-428e-435f-bc47-6a22d00f383e is in state SUCCESS 2025-06-05 19:57:33.457132 | orchestrator | 2025-06-05 19:57:33.457200 | orchestrator | 2025-06-05 19:57:33.457209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:57:33.457216 | orchestrator | 2025-06-05 19:57:33.457222 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:57:33.457228 | orchestrator | Thursday 05 June 2025 19:54:35 +0000 (0:00:00.341) 0:00:00.341 ********* 2025-06-05 19:57:33.457234 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:57:33.457241 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:57:33.457247 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:57:33.457253 | orchestrator | ok: [testbed-node-3] 2025-06-05 19:57:33.457259 | orchestrator | ok: [testbed-node-4] 2025-06-05 19:57:33.457264 | orchestrator | ok: [testbed-node-5] 2025-06-05 19:57:33.457270 | orchestrator | 2025-06-05 19:57:33.457276 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:57:33.457282 | orchestrator | Thursday 05 June 2025 19:54:36 +0000 (0:00:01.321) 0:00:01.663 ********* 2025-06-05 19:57:33.457287 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-05 
19:57:33.457293 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-05 19:57:33.457299 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-05 19:57:33.457304 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-05 19:57:33.457310 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-05 19:57:33.457315 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-05 19:57:33.457321 | orchestrator | 2025-06-05 19:57:33.457327 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-05 19:57:33.457332 | orchestrator | 2025-06-05 19:57:33.457338 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-05 19:57:33.457343 | orchestrator | Thursday 05 June 2025 19:54:37 +0000 (0:00:01.457) 0:00:03.120 ********* 2025-06-05 19:57:33.457349 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:57:33.457356 | orchestrator | 2025-06-05 19:57:33.457362 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-05 19:57:33.457368 | orchestrator | Thursday 05 June 2025 19:54:40 +0000 (0:00:02.173) 0:00:05.293 ********* 2025-06-05 19:57:33.457374 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-05 19:57:33.457380 | orchestrator | 2025-06-05 19:57:33.457385 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-05 19:57:33.457391 | orchestrator | Thursday 05 June 2025 19:54:43 +0000 (0:00:03.416) 0:00:08.710 ********* 2025-06-05 19:57:33.457397 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-05 19:57:33.457402 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-05 19:57:33.457408 | orchestrator | 2025-06-05 19:57:33.457413 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-05 19:57:33.457419 | orchestrator | Thursday 05 June 2025 19:54:50 +0000 (0:00:06.783) 0:00:15.493 ********* 2025-06-05 19:57:33.457432 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-05 19:57:33.457438 | orchestrator | 2025-06-05 19:57:33.457444 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-05 19:57:33.457449 | orchestrator | Thursday 05 June 2025 19:54:53 +0000 (0:00:03.534) 0:00:19.027 ********* 2025-06-05 19:57:33.457455 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-05 19:57:33.457476 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-05 19:57:33.457482 | orchestrator | 2025-06-05 19:57:33.457487 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-05 19:57:33.457493 | orchestrator | Thursday 05 June 2025 19:54:57 +0000 (0:00:04.123) 0:00:23.151 ********* 2025-06-05 19:57:33.457512 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-05 19:57:33.457518 | orchestrator | 2025-06-05 19:57:33.457523 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-05 19:57:33.457529 | orchestrator | Thursday 05 June 2025 19:55:01 +0000 (0:00:03.586) 0:00:26.737 ********* 2025-06-05 19:57:33.457534 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-05 19:57:33.457540 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-05 19:57:33.457545 | orchestrator | 2025-06-05 19:57:33.457550 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 
2025-06-05 19:57:33.457556 | orchestrator | Thursday 05 June 2025 19:55:10 +0000 (0:00:08.538) 0:00:35.276 ********* 2025-06-05 19:57:33.457572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.457580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 
19:57:33.457586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.457595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457688 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457695 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457714 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457721 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457730 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.457736 | orchestrator | 2025-06-05 19:57:33.457742 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-06-05 19:57:33.457747 | orchestrator | Thursday 05 June 2025 19:55:12 +0000 (0:00:02.874) 0:00:38.151 ********* 2025-06-05 19:57:33.457753 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.457759 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.457764 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.457769 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.457967 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.457977 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.457982 | orchestrator | 2025-06-05 19:57:33.457988 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-05 19:57:33.457994 | orchestrator | Thursday 05 June 2025 19:55:13 +0000 (0:00:00.677) 0:00:38.828 ********* 2025-06-05 19:57:33.458000 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.458005 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.458011 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.458039 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:57:33.458045 | orchestrator | 2025-06-05 19:57:33.458051 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-05 19:57:33.458057 | orchestrator | Thursday 05 June 2025 19:55:14 +0000 (0:00:01.234) 0:00:40.062 ********* 2025-06-05 19:57:33.458062 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-05 19:57:33.458074 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-05 19:57:33.458080 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-05 19:57:33.458086 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-05 19:57:33.458092 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 
2025-06-05 19:57:33.458097 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-05 19:57:33.458103 | orchestrator | 2025-06-05 19:57:33.458109 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-05 19:57:33.458115 | orchestrator | Thursday 05 June 2025 19:55:17 +0000 (0:00:02.198) 0:00:42.261 ********* 2025-06-05 19:57:33.458125 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-05 19:57:33.458132 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-05 19:57:33.458143 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-05 19:57:33.458149 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-05 19:57:33.458162 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-05 19:57:33.458211 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-05 19:57:33.458221 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-05 19:57:33.458232 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-05 19:57:33.458239 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-05 19:57:33.458249 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-05 19:57:33.458259 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 
'enabled': True}]) 2025-06-05 19:57:33.458265 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-05 19:57:33.458271 | orchestrator | 2025-06-05 19:57:33.458277 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-05 19:57:33.458282 | orchestrator | Thursday 05 June 2025 19:55:20 +0000 (0:00:03.518) 0:00:45.780 ********* 2025-06-05 19:57:33.458288 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-05 19:57:33.458295 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-05 19:57:33.458481 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-05 19:57:33.458488 | orchestrator | 2025-06-05 19:57:33.458494 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-05 19:57:33.458535 | orchestrator | Thursday 05 June 2025 19:55:22 +0000 (0:00:02.364) 0:00:48.144 ********* 2025-06-05 19:57:33.458562 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-05 19:57:33.458569 | orchestrator | changed: [testbed-node-4] => 
(item=ceph.client.cinder.keyring) 2025-06-05 19:57:33.458575 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-05 19:57:33.458589 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-05 19:57:33.458595 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-05 19:57:33.458601 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-05 19:57:33.458607 | orchestrator | 2025-06-05 19:57:33.458613 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-05 19:57:33.458618 | orchestrator | Thursday 05 June 2025 19:55:26 +0000 (0:00:03.306) 0:00:51.451 ********* 2025-06-05 19:57:33.458624 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-05 19:57:33.458630 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-05 19:57:33.458635 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-05 19:57:33.458641 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-05 19:57:33.458647 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-05 19:57:33.458652 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-05 19:57:33.458658 | orchestrator | 2025-06-05 19:57:33.458664 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-05 19:57:33.458669 | orchestrator | Thursday 05 June 2025 19:55:27 +0000 (0:00:01.099) 0:00:52.551 ********* 2025-06-05 19:57:33.458675 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.458681 | orchestrator | 2025-06-05 19:57:33.458686 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-05 19:57:33.458692 | orchestrator | Thursday 05 June 2025 19:55:27 +0000 (0:00:00.084) 0:00:52.635 ********* 2025-06-05 19:57:33.458698 | orchestrator | skipping: 
[testbed-node-0] 2025-06-05 19:57:33.458704 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.458709 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.458715 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.458720 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.458726 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.458732 | orchestrator | 2025-06-05 19:57:33.458737 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-05 19:57:33.458743 | orchestrator | Thursday 05 June 2025 19:55:28 +0000 (0:00:00.964) 0:00:53.600 ********* 2025-06-05 19:57:33.458750 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 19:57:33.458757 | orchestrator | 2025-06-05 19:57:33.458762 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-05 19:57:33.458768 | orchestrator | Thursday 05 June 2025 19:55:29 +0000 (0:00:00.956) 0:00:54.556 ********* 2025-06-05 19:57:33.458778 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.458796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.458826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.458833 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458842 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458855 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458881 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458887 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458902 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458908 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.458918 | orchestrator | 2025-06-05 19:57:33.458924 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-05 19:57:33.458930 | orchestrator | Thursday 05 June 2025 19:55:32 +0000 (0:00:02.768) 0:00:57.325 ********* 2025-06-05 19:57:33.458939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.458945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.458951 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.458957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.458965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.458971 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.458977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.458987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.458993 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.459002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459014 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.459023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459055 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.459060 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.459066 | orchestrator | 2025-06-05 19:57:33.459071 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-05 19:57:33.459077 | orchestrator | Thursday 05 June 2025 19:55:33 +0000 (0:00:00.963) 0:00:58.288 ********* 2025-06-05 19:57:33.459082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.459088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459098 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.459105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.459115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.459133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459139 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.459145 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.459152 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459178 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.459188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459194 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.459201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459218 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.459224 | orchestrator | 2025-06-05 19:57:33.459230 | orchestrator | TASK [cinder : Copying over config.json files for services] 
******************** 2025-06-05 19:57:33.459237 | orchestrator | Thursday 05 June 2025 19:55:34 +0000 (0:00:01.328) 0:00:59.617 ********* 2025-06-05 19:57:33.459246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}}) 2025-06-05 19:57:33.459264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459271 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459284 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 
'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459315 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459334 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459341 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459348 | orchestrator | 2025-06-05 19:57:33.459354 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] 
********************************** 2025-06-05 19:57:33.459360 | orchestrator | Thursday 05 June 2025 19:55:37 +0000 (0:00:02.699) 0:01:02.316 ********* 2025-06-05 19:57:33.459367 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-05 19:57:33.459373 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.459380 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-05 19:57:33.459386 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-05 19:57:33.459393 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.459399 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-05 19:57:33.459405 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-05 19:57:33.459411 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.459419 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-05 19:57:33.459425 | orchestrator | 2025-06-05 19:57:33.459430 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-05 19:57:33.459436 | orchestrator | Thursday 05 June 2025 19:55:39 +0000 (0:00:02.540) 0:01:04.857 ********* 2025-06-05 19:57:33.459442 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459491 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459537 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459552 | orchestrator | 2025-06-05 19:57:33.459557 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-05 19:57:33.459563 | orchestrator | Thursday 05 June 2025 19:55:47 +0000 (0:00:07.952) 0:01:12.809 ********* 2025-06-05 19:57:33.459569 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.459574 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.459580 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.459585 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:57:33.459591 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:57:33.459596 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:57:33.459601 | orchestrator | 2025-06-05 19:57:33.459607 | orchestrator | TASK 
[cinder : Copying over existing policy file] ****************************** 2025-06-05 19:57:33.459613 | orchestrator | Thursday 05 June 2025 19:55:49 +0000 (0:00:01.861) 0:01:14.671 ********* 2025-06-05 19:57:33.459623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.459629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459634 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.459643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.459653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459659 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.459665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-05 19:57:33.459673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459679 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.459685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459697 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.459709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459721 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.459729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-05 19:57:33.459741 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.459751 | orchestrator | 2025-06-05 19:57:33.459760 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-05 19:57:33.459769 | orchestrator | Thursday 05 June 2025 19:55:50 +0000 (0:00:01.052) 0:01:15.723 ********* 2025-06-05 19:57:33.459778 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.459788 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.459796 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.459804 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.459812 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.459820 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.459827 | orchestrator | 2025-06-05 19:57:33.459841 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-05 19:57:33.459850 | orchestrator | Thursday 05 June 2025 19:55:51 +0000 (0:00:00.743) 0:01:16.467 ********* 2025-06-05 19:57:33.459864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459874 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-05 19:57:33.459907 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 
19:57:33.459921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459947 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459966 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-05 19:57:33.459971 | orchestrator | 2025-06-05 19:57:33.459977 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-05 19:57:33.459983 | orchestrator | Thursday 05 June 2025 19:55:53 +0000 (0:00:02.331) 0:01:18.799 ********* 2025-06-05 19:57:33.459988 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.459994 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:57:33.459999 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:57:33.460004 | orchestrator | skipping: [testbed-node-3] 2025-06-05 19:57:33.460010 | orchestrator | skipping: [testbed-node-4] 2025-06-05 19:57:33.460015 | orchestrator | skipping: [testbed-node-5] 2025-06-05 19:57:33.460021 | orchestrator | 2025-06-05 19:57:33.460026 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-05 
19:57:33.460031 | orchestrator | Thursday 05 June 2025 19:55:54 +0000 (0:00:00.706) 0:01:19.505 ********* 2025-06-05 19:57:33.460037 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:57:33.460042 | orchestrator | 2025-06-05 19:57:33.460047 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-05 19:57:33.460053 | orchestrator | Thursday 05 June 2025 19:55:56 +0000 (0:00:02.475) 0:01:21.980 ********* 2025-06-05 19:57:33.460058 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:57:33.460064 | orchestrator | 2025-06-05 19:57:33.460069 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-05 19:57:33.460075 | orchestrator | Thursday 05 June 2025 19:55:59 +0000 (0:00:02.363) 0:01:24.344 ********* 2025-06-05 19:57:33.460080 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:57:33.460086 | orchestrator | 2025-06-05 19:57:33.460091 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-05 19:57:33.460096 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 (0:00:19.464) 0:01:43.808 ********* 2025-06-05 19:57:33.460102 | orchestrator | 2025-06-05 19:57:33.460107 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-05 19:57:33.460113 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 (0:00:00.062) 0:01:43.871 ********* 2025-06-05 19:57:33.460118 | orchestrator | 2025-06-05 19:57:33.460124 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-05 19:57:33.460129 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 (0:00:00.064) 0:01:43.936 ********* 2025-06-05 19:57:33.460134 | orchestrator | 2025-06-05 19:57:33.460142 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-05 19:57:33.460151 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 
(0:00:00.064) 0:01:44.000 ********* 2025-06-05 19:57:33.460157 | orchestrator | 2025-06-05 19:57:33.460162 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-05 19:57:33.460167 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 (0:00:00.066) 0:01:44.067 ********* 2025-06-05 19:57:33.460173 | orchestrator | 2025-06-05 19:57:33.460178 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-05 19:57:33.460183 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 (0:00:00.063) 0:01:44.131 ********* 2025-06-05 19:57:33.460189 | orchestrator | 2025-06-05 19:57:33.460194 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-05 19:57:33.460200 | orchestrator | Thursday 05 June 2025 19:56:18 +0000 (0:00:00.058) 0:01:44.189 ********* 2025-06-05 19:57:33.460205 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:57:33.460210 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:57:33.460216 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:57:33.460221 | orchestrator | 2025-06-05 19:57:33.460227 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-05 19:57:33.460232 | orchestrator | Thursday 05 June 2025 19:56:41 +0000 (0:00:22.079) 0:02:06.269 ********* 2025-06-05 19:57:33.460237 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:57:33.460243 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:57:33.460248 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:57:33.460253 | orchestrator | 2025-06-05 19:57:33.460259 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-05 19:57:33.460264 | orchestrator | Thursday 05 June 2025 19:56:45 +0000 (0:00:04.929) 0:02:11.198 ********* 2025-06-05 19:57:33.460269 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:57:33.460275 | orchestrator | 
changed: [testbed-node-5] 2025-06-05 19:57:33.460280 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:57:33.460285 | orchestrator | 2025-06-05 19:57:33.460291 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-05 19:57:33.460296 | orchestrator | Thursday 05 June 2025 19:57:21 +0000 (0:00:36.052) 0:02:47.251 ********* 2025-06-05 19:57:33.460302 | orchestrator | changed: [testbed-node-4] 2025-06-05 19:57:33.460307 | orchestrator | changed: [testbed-node-5] 2025-06-05 19:57:33.460312 | orchestrator | changed: [testbed-node-3] 2025-06-05 19:57:33.460318 | orchestrator | 2025-06-05 19:57:33.460323 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-05 19:57:33.460329 | orchestrator | Thursday 05 June 2025 19:57:32 +0000 (0:00:10.287) 0:02:57.538 ********* 2025-06-05 19:57:33.460334 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:57:33.460339 | orchestrator | 2025-06-05 19:57:33.460345 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:57:33.460353 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-05 19:57:33.460359 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-05 19:57:33.460364 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-05 19:57:33.460370 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-05 19:57:33.460376 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-05 19:57:33.460381 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-05 19:57:33.460387 | orchestrator | 2025-06-05 19:57:33.460392 | 
orchestrator | 2025-06-05 19:57:33.460401 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 19:57:33.460406 | orchestrator | Thursday 05 June 2025 19:57:32 +0000 (0:00:00.605) 0:02:58.144 ********* 2025-06-05 19:57:33.460412 | orchestrator | =============================================================================== 2025-06-05 19:57:33.460417 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 36.05s 2025-06-05 19:57:33.460422 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.08s 2025-06-05 19:57:33.460428 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.46s 2025-06-05 19:57:33.460433 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.29s 2025-06-05 19:57:33.460439 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.54s 2025-06-05 19:57:33.460444 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 7.95s 2025-06-05 19:57:33.460449 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.78s 2025-06-05 19:57:33.460455 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 4.93s 2025-06-05 19:57:33.460460 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.12s 2025-06-05 19:57:33.460466 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.59s 2025-06-05 19:57:33.460471 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.53s 2025-06-05 19:57:33.460477 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.52s 2025-06-05 19:57:33.460485 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.42s 
2025-06-05 19:57:33.460490 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.31s 2025-06-05 19:57:33.460496 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.87s 2025-06-05 19:57:33.460545 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.77s 2025-06-05 19:57:33.460552 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.70s 2025-06-05 19:57:33.460557 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.54s 2025-06-05 19:57:33.460563 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.48s 2025-06-05 19:57:33.460568 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.36s 2025-06-05 19:57:33.460574 | orchestrator | 2025-06-05 19:57:33 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:57:33.463525 | orchestrator | 2025-06-05 19:57:33 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:57:33.463538 | orchestrator | 2025-06-05 19:57:33 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:57:36.509727 | orchestrator | 2025-06-05 19:57:36 | INFO  | Task f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state STARTED 2025-06-05 19:57:36.511362 | orchestrator | 2025-06-05 19:57:36 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state STARTED 2025-06-05 19:57:36.513788 | orchestrator | 2025-06-05 19:57:36 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:57:36.515677 | orchestrator | 2025-06-05 19:57:36 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:57:36.515987 | orchestrator | 2025-06-05 19:57:36 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:57:39.554255 | orchestrator | 2025-06-05 19:57:39 | INFO  | Task 
f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state STARTED 2025-06-05 19:57:39.556328 | orchestrator | 2025-06-05 19:57:39 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state STARTED 2025-06-05 19:57:39.557860 | orchestrator | 2025-06-05 19:57:39 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:57:39.559075 | orchestrator | 2025-06-05 19:57:39 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED 2025-06-05 19:57:39.559104 | orchestrator | 2025-06-05 19:57:39 | INFO  | Wait 1 second(s) until the next check 2025-06-05 19:57:42.586734 | orchestrator | 2025-06-05 19:57:42 | INFO  | Task f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state STARTED 2025-06-05 19:57:42.589739 | orchestrator | 2025-06-05 19:57:42 | INFO  | Task bd6598bc-7369-4ae6-951f-845dc8f79a1b is in state SUCCESS 2025-06-05 19:57:42.590190 | orchestrator | 2025-06-05 19:57:42.590222 | orchestrator | 2025-06-05 19:57:42.590234 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-05 19:57:42.590246 | orchestrator | 2025-06-05 19:57:42.590258 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-05 19:57:42.590269 | orchestrator | Thursday 05 June 2025 19:56:43 +0000 (0:00:00.256) 0:00:00.256 ********* 2025-06-05 19:57:42.590280 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:57:42.590292 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:57:42.590303 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:57:42.590314 | orchestrator | 2025-06-05 19:57:42.590326 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-05 19:57:42.590337 | orchestrator | Thursday 05 June 2025 19:56:44 +0000 (0:00:00.291) 0:00:00.548 ********* 2025-06-05 19:57:42.590348 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-05 19:57:42.590359 | orchestrator | ok: [testbed-node-1] => 
(item=enable_octavia_True) 2025-06-05 19:57:42.590370 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-05 19:57:42.590381 | orchestrator | 2025-06-05 19:57:42.590392 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-05 19:57:42.590403 | orchestrator | 2025-06-05 19:57:42.590414 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-05 19:57:42.590424 | orchestrator | Thursday 05 June 2025 19:56:44 +0000 (0:00:00.430) 0:00:00.978 ********* 2025-06-05 19:57:42.590435 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:57:42.590447 | orchestrator | 2025-06-05 19:57:42.590458 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-05 19:57:42.590469 | orchestrator | Thursday 05 June 2025 19:56:45 +0000 (0:00:00.517) 0:00:01.496 ********* 2025-06-05 19:57:42.590480 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-05 19:57:42.590491 | orchestrator | 2025-06-05 19:57:42.590502 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-05 19:57:42.590540 | orchestrator | Thursday 05 June 2025 19:56:49 +0000 (0:00:04.085) 0:00:05.582 ********* 2025-06-05 19:57:42.590554 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-05 19:57:42.590565 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-05 19:57:42.590575 | orchestrator | 2025-06-05 19:57:42.590586 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-05 19:57:42.590610 | orchestrator | Thursday 05 June 2025 19:56:56 +0000 (0:00:07.375) 0:00:12.958 ********* 2025-06-05 19:57:42.590622 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-05 19:57:42.590633 | orchestrator | 2025-06-05 19:57:42.590644 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-05 19:57:42.590654 | orchestrator | Thursday 05 June 2025 19:56:59 +0000 (0:00:03.012) 0:00:15.971 ********* 2025-06-05 19:57:42.590665 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-05 19:57:42.590676 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-05 19:57:42.590687 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-05 19:57:42.590697 | orchestrator | 2025-06-05 19:57:42.590709 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-05 19:57:42.590744 | orchestrator | Thursday 05 June 2025 19:57:07 +0000 (0:00:08.086) 0:00:24.057 ********* 2025-06-05 19:57:42.590755 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-05 19:57:42.590766 | orchestrator | 2025-06-05 19:57:42.590776 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-05 19:57:42.590787 | orchestrator | Thursday 05 June 2025 19:57:11 +0000 (0:00:03.422) 0:00:27.479 ********* 2025-06-05 19:57:42.590798 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-05 19:57:42.590811 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-05 19:57:42.590824 | orchestrator | 2025-06-05 19:57:42.590836 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-05 19:57:42.590855 | orchestrator | Thursday 05 June 2025 19:57:19 +0000 (0:00:08.153) 0:00:35.632 ********* 2025-06-05 19:57:42.590875 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-05 19:57:42.590895 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_global_observer) 2025-06-05 19:57:42.590916 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-05 19:57:42.590936 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-05 19:57:42.590956 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-05 19:57:42.590975 | orchestrator | 2025-06-05 19:57:42.590988 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-05 19:57:42.591001 | orchestrator | Thursday 05 June 2025 19:57:36 +0000 (0:00:17.366) 0:00:52.998 ********* 2025-06-05 19:57:42.591014 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 19:57:42.591026 | orchestrator | 2025-06-05 19:57:42.591039 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-05 19:57:42.591051 | orchestrator | Thursday 05 June 2025 19:57:37 +0000 (0:00:00.531) 0:00:53.530 ********* 2025-06-05 19:57:42.591064 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-06-05 19:57:42.591111 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1749153458.6241498-6382-174102398935243/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1749153458.6241498-6382-174102398935243/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1749153458.6241498-6382-174102398935243/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_5gr2cpwc/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_5gr2cpwc/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_5gr2cpwc/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_5gr2cpwc/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File 
\"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-06-05 19:57:42.591139 | orchestrator | 2025-06-05 19:57:42.591153 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:57:42.591166 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-05 19:57:42.591179 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:57:42.591192 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-05 19:57:42.591203 | orchestrator | 2025-06-05 19:57:42.591214 | orchestrator | 2025-06-05 19:57:42.591225 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-05 19:57:42.591236 | orchestrator | Thursday 05 June 2025 19:57:40 +0000 (0:00:03.172) 0:00:56.702 ********* 2025-06-05 19:57:42.591253 | orchestrator | =============================================================================== 2025-06-05 19:57:42.591265 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.37s 2025-06-05 19:57:42.591276 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.15s 2025-06-05 19:57:42.591286 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.09s 2025-06-05 19:57:42.591297 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.38s 2025-06-05 19:57:42.591308 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 4.09s 2025-06-05 19:57:42.591319 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.42s 2025-06-05 19:57:42.591330 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.17s 2025-06-05 19:57:42.591341 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.01s 2025-06-05 19:57:42.591352 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.53s 2025-06-05 19:57:42.591363 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.52s 2025-06-05 19:57:42.591374 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-06-05 19:57:42.591385 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-05 19:57:42.591600 | orchestrator | 2025-06-05 19:57:42 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED 2025-06-05 19:57:42.593338 | orchestrator | 2025-06-05 19:57:42 | 
INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:57:42.593547 | orchestrator | 2025-06-05 19:57:42 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:57:45.635575 | orchestrator | 2025-06-05 19:57:45 | INFO  | Task f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state STARTED
2025-06-05 19:57:45.637340 | orchestrator | 2025-06-05 19:57:45 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:57:45.639021 | orchestrator | 2025-06-05 19:57:45 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state STARTED
2025-06-05 19:57:45.639461 | orchestrator | 2025-06-05 19:57:45 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:59:20.092363 | orchestrator | 2025-06-05 19:59:20 | INFO  | Task f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state STARTED
2025-06-05 19:59:20.093750 | orchestrator | 2025-06-05 19:59:20 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:59:20.094554 | orchestrator | 2025-06-05 19:59:20 | INFO  | Task 40706afa-54a5-4c17-bb2e-1467ea5c83b6 is in state SUCCESS
2025-06-05 19:59:20.094584 | orchestrator | 2025-06-05 19:59:20 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:59:23.147284 | orchestrator | 2025-06-05 19:59:23 | INFO  | Task f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state STARTED
2025-06-05 19:59:23.147388 | orchestrator | 2025-06-05 19:59:23 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:59:23.147404 | orchestrator | 2025-06-05 19:59:23 | INFO  | Wait 1 second(s) until the next check
2025-06-05 19:59:59.716712 | orchestrator | 2025-06-05 19:59:59 | INFO  | Task f64f2de3-f3c1-415e-bcf1-2fab60523f8f is in state SUCCESS
2025-06-05 19:59:59.718321 | orchestrator |
2025-06-05 19:59:59.718363 | orchestrator |
2025-06-05 19:59:59.718417 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:59:59.718430 | orchestrator |
2025-06-05 19:59:59.718442 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:59:59.718484 | orchestrator | Thursday 05 June 2025 19:55:54 +0000 (0:00:00.177) 0:00:00.177 *********
2025-06-05 19:59:59.718523 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:59:59.718536 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:59:59.718548 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:59:59.718612 | orchestrator |
2025-06-05 19:59:59.718626 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:59:59.718637 | orchestrator | Thursday 05 June 2025 19:55:54 +0000 (0:00:00.296) 0:00:00.474 *********
2025-06-05 19:59:59.718648 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-06-05 19:59:59.718659 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-06-05 19:59:59.718670 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-06-05 19:59:59.718681 | orchestrator |
2025-06-05 19:59:59.718692 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-06-05 19:59:59.718703 | orchestrator |
2025-06-05 19:59:59.718751 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-06-05 19:59:59.718841 | orchestrator | Thursday 05 June 2025 19:55:54 +0000 (0:00:00.570) 0:00:01.044 *********
2025-06-05 19:59:59.718852 | orchestrator |
2025-06-05 19:59:59.718863 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-06-05 19:59:59.718874 | orchestrator |
2025-06-05 19:59:59.718885 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] **********
2025-06-05 19:59:59.718896 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:59:59.718906 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:59:59.718917 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:59:59.718930 | orchestrator |
2025-06-05 19:59:59.718943 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 19:59:59.718957 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:59:59.718971 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:59:59.718984 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 19:59:59.718996 | orchestrator |
2025-06-05 19:59:59.719008 | orchestrator |
2025-06-05 19:59:59.719021 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:59:59.719033 | orchestrator | Thursday 05 June 2025 19:59:17 +0000 (0:03:22.837) 0:03:23.882 *********
2025-06-05 19:59:59.719045 | orchestrator | ===============================================================================
2025-06-05 19:59:59.719058 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 202.84s
2025-06-05 19:59:59.719071 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s
2025-06-05 19:59:59.719083 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-06-05 19:59:59.719096 | orchestrator |
2025-06-05 19:59:59.719108 | orchestrator |
2025-06-05 19:59:59.719134 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 19:59:59.719147 | orchestrator |
2025-06-05 19:59:59.719160 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 19:59:59.719172 | orchestrator | Thursday 05 June 2025 19:57:37 +0000 (0:00:00.249) 0:00:00.249 *********
2025-06-05 19:59:59.719184 | orchestrator | ok: [testbed-node-0]
2025-06-05 19:59:59.719196 | orchestrator | ok: [testbed-node-1]
2025-06-05 19:59:59.719209 | orchestrator | ok: [testbed-node-2]
2025-06-05 19:59:59.719221 | orchestrator |
2025-06-05 19:59:59.719234 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 19:59:59.719246 | orchestrator | Thursday 05 June 2025 19:57:37 +0000 (0:00:00.278) 0:00:00.528 *********
2025-06-05 19:59:59.719258 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-05 19:59:59.719271 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-05 19:59:59.719282 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-05 19:59:59.719293 | orchestrator |
2025-06-05 19:59:59.719304 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-05 19:59:59.719314 | orchestrator |
2025-06-05 19:59:59.719325 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-05 19:59:59.719346 | orchestrator | Thursday 05 June 2025 19:57:37 +0000 (0:00:00.366) 0:00:00.894 *********
2025-06-05 19:59:59.719357 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:59:59.719368 | orchestrator |
2025-06-05 19:59:59.719378 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-05 19:59:59.719389 | orchestrator | Thursday 05 June 2025 19:57:38 +0000 (0:00:00.433) 0:00:01.328 *********
2025-06-05 19:59:59.719403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719457 | orchestrator |
2025-06-05 19:59:59.719468 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-05 19:59:59.719480 | orchestrator | Thursday 05 June 2025 19:57:38 +0000 (0:00:00.627) 0:00:01.956 *********
2025-06-05 19:59:59.719491 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-05 19:59:59.719503 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-05 19:59:59.719514 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 19:59:59.719525 | orchestrator |
2025-06-05 19:59:59.719536 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-05 19:59:59.719547 | orchestrator | Thursday 05 June 2025 19:57:39 +0000 (0:00:00.716) 0:00:02.672 *********
2025-06-05 19:59:59.719558 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 19:59:59.719569 | orchestrator |
2025-06-05 19:59:59.719580 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-05 19:59:59.719591 | orchestrator | Thursday 05 June 2025 19:57:40 +0000 (0:00:00.546) 0:00:03.219 *********
2025-06-05 19:59:59.719608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719658 | orchestrator |
2025-06-05 19:59:59.719669 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-05 19:59:59.719680 | orchestrator | Thursday 05 June 2025 19:57:41 +0000 (0:00:01.375) 0:00:04.595 *********
2025-06-05 19:59:59.719692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719715 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:59:59.719727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719739 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:59:59.719786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719807 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:59:59.719834 | orchestrator |
2025-06-05 19:59:59.719846 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-05 19:59:59.719857 | orchestrator | Thursday 05 June 2025 19:57:41 +0000 (0:00:00.278) 0:00:04.874 *********
2025-06-05 19:59:59.719868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719891 | orchestrator | skipping: [testbed-node-0]
2025-06-05 19:59:59.719902 | orchestrator | skipping: [testbed-node-1]
2025-06-05 19:59:59.719921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719933 | orchestrator | skipping: [testbed-node-2]
2025-06-05 19:59:59.719944 | orchestrator |
2025-06-05 19:59:59.719955 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-05 19:59:59.719966 | orchestrator | Thursday 05 June 2025 19:57:42 +0000 (0:00:00.607) 0:00:05.481 *********
2025-06-05 19:59:59.719977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-05 19:59:59.719993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.720012 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.720023 | orchestrator | 2025-06-05 19:59:59.720034 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-05 19:59:59.720045 | orchestrator | Thursday 05 June 2025 19:57:43 +0000 (0:00:01.239) 0:00:06.721 ********* 2025-06-05 19:59:59.720056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.720076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.720088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.720100 | orchestrator | 2025-06-05 19:59:59.720111 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-05 19:59:59.720122 | orchestrator | Thursday 05 June 2025 19:57:44 +0000 (0:00:01.279) 0:00:08.000 ********* 2025-06-05 19:59:59.720133 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:59:59.720144 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:59:59.720162 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:59:59.720173 | orchestrator | 2025-06-05 19:59:59.720184 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] 
************* 2025-06-05 19:59:59.720195 | orchestrator | Thursday 05 June 2025 19:57:45 +0000 (0:00:00.367) 0:00:08.367 ********* 2025-06-05 19:59:59.720206 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-05 19:59:59.720217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-05 19:59:59.720228 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-05 19:59:59.720239 | orchestrator | 2025-06-05 19:59:59.720249 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-05 19:59:59.720260 | orchestrator | Thursday 05 June 2025 19:57:46 +0000 (0:00:01.193) 0:00:09.561 ********* 2025-06-05 19:59:59.720272 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-05 19:59:59.720283 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-05 19:59:59.720299 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-05 19:59:59.720311 | orchestrator | 2025-06-05 19:59:59.720322 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-05 19:59:59.720333 | orchestrator | Thursday 05 June 2025 19:57:47 +0000 (0:00:01.228) 0:00:10.789 ********* 2025-06-05 19:59:59.720344 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-05 19:59:59.720354 | orchestrator | 2025-06-05 19:59:59.720365 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-05 19:59:59.720376 | orchestrator | Thursday 05 June 2025 19:57:48 +0000 (0:00:00.709) 0:00:11.499 ********* 2025-06-05 19:59:59.720387 | orchestrator | [WARNING]: 
Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-05 19:59:59.720398 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-05 19:59:59.720409 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:59:59.720420 | orchestrator | ok: [testbed-node-1] 2025-06-05 19:59:59.720431 | orchestrator | ok: [testbed-node-2] 2025-06-05 19:59:59.720442 | orchestrator | 2025-06-05 19:59:59.720453 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-05 19:59:59.720464 | orchestrator | Thursday 05 June 2025 19:57:49 +0000 (0:00:00.653) 0:00:12.152 ********* 2025-06-05 19:59:59.720475 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:59:59.720486 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:59:59.720497 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:59:59.720508 | orchestrator | 2025-06-05 19:59:59.720518 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-05 19:59:59.720529 | orchestrator | Thursday 05 June 2025 19:57:49 +0000 (0:00:00.476) 0:00:12.628 ********* 2025-06-05 19:59:59.720541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1072247, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1171956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1072247, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1171956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1072247, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1171956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1072235, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1131957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720610 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1072235, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1131957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1072235, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1131957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1072227, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1101956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720651 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1072227, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1101956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1072227, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1101956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1072245, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1151958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720706 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1072245, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1151958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1072245, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1151958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1072221, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1061954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 
19:59:59.720746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1072221, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1061954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1072221, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1061954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1072230, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1111956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-05 19:59:59.720806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1072230, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1111956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1072230, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1111956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1072240, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1141956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.720846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1072240, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1141956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1072240, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1141956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1072220, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1061954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1072220, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1061954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1072220, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1061954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1072211, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1011953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1072211, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1011953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1072211, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1011953, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1072222, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1071956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1072222, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1071956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1072222, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1071956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1072214, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1031954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1072214, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1031954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1072214, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1031954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1072239, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1131957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1072239, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1131957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1072239, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1131957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1072223, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1081955, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1072223, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1081955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1072223, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1081955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1072246, 'dev': 102, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1151958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1072246, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1151958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1072246, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1151958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1072219, 'dev': 102, 'nlink': 
1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1051955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1072219, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1051955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1072219, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1051955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1072233, 'dev': 
102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1121955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1072233, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1121955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.721603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1072212, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1031954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 49139, 'inode': 1072233, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1121955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1072212, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1031954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072216, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1051955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1072212, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1031954, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072216, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1051955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1072226, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1091955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1072216, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1051955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1072226, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1091955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072276, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.136196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1072226, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1091955, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072276, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.136196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072271, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1281958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722273 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1072276, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.136196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072271, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1281958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1072254, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1181958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1072271, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1281958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1072254, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1181958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072291, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.141196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1072254, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1181958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072291, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.141196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1072255, 'dev': 102, 'nlink': 1, 'atime': 
1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1191957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1072291, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.141196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1072255, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1191957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1072289, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.139196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1072255, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1191957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1072289, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.139196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072292, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1421962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722546 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1072289, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.139196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072292, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1421962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722571 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072286, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.137196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1072292, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1421962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072286, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.137196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072288, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.139196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1072286, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.137196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072288, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.139196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1072257, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1072288, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.139196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1072257, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1201956, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072272, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.129196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1072257, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072272, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 
'mtime': 1748870577.0, 'ctime': 1749150711.129196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072294, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.144196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1072272, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.129196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072294, 
'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.144196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1072290, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.140196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722815 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1072294, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.144196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1072290, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.140196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1072260, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1221957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1072290, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.140196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1072260, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1221957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722892 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1072259, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1072260, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1221957, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1072259, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1072266, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.123196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1072259, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1201956, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722956 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1072266, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.123196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1072267, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1281958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.722991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1072266, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.123196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723003 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1072267, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1281958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1072273, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.129196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1072273, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.129196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1072287, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.138196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723059 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1072267, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1281958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1072274, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1301959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1072287, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.138196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1072273, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.129196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072296, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1451962, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1072274, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1301959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723139 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1072287, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.138196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072296, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1749150711.1451962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1072274, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1301959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1072296, 'dev': 102, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1749150711.1451962, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-05 19:59:59.723184 | orchestrator | 2025-06-05 19:59:59.723195 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-05 19:59:59.723206 | orchestrator | Thursday 05 June 2025 19:58:27 +0000 (0:00:37.651) 0:00:50.280 ********* 2025-06-05 19:59:59.723221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.723232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.723247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-05 19:59:59.723258 | orchestrator | 2025-06-05 19:59:59.723267 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-05 19:59:59.723278 | orchestrator | Thursday 05 June 2025 19:58:28 +0000 (0:00:00.939) 0:00:51.219 ********* 2025-06-05 19:59:59.723288 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:59:59.723298 | orchestrator | 2025-06-05 19:59:59.723307 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-05 19:59:59.723317 | orchestrator | Thursday 05 June 2025 19:58:30 +0000 (0:00:02.454) 0:00:53.674 ********* 2025-06-05 19:59:59.723331 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:59:59.723341 | orchestrator | 2025-06-05 19:59:59.723351 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-05 19:59:59.723361 | orchestrator | Thursday 05 June 2025 19:58:32 +0000 (0:00:02.337) 0:00:56.012 ********* 2025-06-05 19:59:59.723370 | orchestrator | 2025-06-05 19:59:59.723380 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-05 19:59:59.723390 | orchestrator | Thursday 05 June 2025 19:58:33 +0000 (0:00:00.254) 0:00:56.266 ********* 2025-06-05 19:59:59.723399 | orchestrator | 2025-06-05 19:59:59.723409 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-05 19:59:59.723419 | orchestrator | Thursday 05 June 2025 19:58:33 +0000 (0:00:00.063) 0:00:56.329 ********* 2025-06-05 19:59:59.723429 | orchestrator | 2025-06-05 19:59:59.723438 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-05 19:59:59.723448 | orchestrator | Thursday 05 June 2025 19:58:33 +0000 (0:00:00.063) 0:00:56.393 ********* 2025-06-05 19:59:59.723458 | orchestrator | skipping: [testbed-node-1] 
2025-06-05 19:59:59.723467 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:59:59.723477 | orchestrator | changed: [testbed-node-0] 2025-06-05 19:59:59.723487 | orchestrator | 2025-06-05 19:59:59.723496 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-05 19:59:59.723506 | orchestrator | Thursday 05 June 2025 19:58:40 +0000 (0:00:06.902) 0:01:03.296 ********* 2025-06-05 19:59:59.723516 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:59:59.723525 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:59:59.723535 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-05 19:59:59.723545 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-05 19:59:59.723555 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-05 19:59:59.723565 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:59:59.723575 | orchestrator | 2025-06-05 19:59:59.723584 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-05 19:59:59.723594 | orchestrator | Thursday 05 June 2025 19:59:19 +0000 (0:00:38.860) 0:01:42.156 ********* 2025-06-05 19:59:59.723604 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:59:59.723619 | orchestrator | changed: [testbed-node-1] 2025-06-05 19:59:59.723629 | orchestrator | changed: [testbed-node-2] 2025-06-05 19:59:59.723638 | orchestrator | 2025-06-05 19:59:59.723648 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-05 19:59:59.723658 | orchestrator | Thursday 05 June 2025 19:59:52 +0000 (0:00:33.064) 0:02:15.221 ********* 2025-06-05 19:59:59.723667 | orchestrator | ok: [testbed-node-0] 2025-06-05 19:59:59.723677 | orchestrator | 2025-06-05 19:59:59.723692 | orchestrator | 
TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-05 19:59:59.723702 | orchestrator | Thursday 05 June 2025 19:59:54 +0000 (0:00:02.428) 0:02:17.649 ********* 2025-06-05 19:59:59.723711 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:59:59.723721 | orchestrator | skipping: [testbed-node-1] 2025-06-05 19:59:59.723731 | orchestrator | skipping: [testbed-node-2] 2025-06-05 19:59:59.723740 | orchestrator | 2025-06-05 19:59:59.723750 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-05 19:59:59.723773 | orchestrator | Thursday 05 June 2025 19:59:54 +0000 (0:00:00.291) 0:02:17.941 ********* 2025-06-05 19:59:59.723785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-05 19:59:59.723797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-05 19:59:59.723808 | orchestrator | 2025-06-05 19:59:59.723818 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-05 19:59:59.723827 | orchestrator | Thursday 05 June 2025 19:59:57 +0000 (0:00:02.564) 0:02:20.506 ********* 2025-06-05 19:59:59.723837 | orchestrator | skipping: [testbed-node-0] 2025-06-05 19:59:59.723846 | orchestrator | 2025-06-05 19:59:59.723856 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 19:59:59.723866 | orchestrator | 
testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-05 19:59:59.723876 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-05 19:59:59.723886 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-05 19:59:59.723896 | orchestrator |
2025-06-05 19:59:59.723905 | orchestrator |
2025-06-05 19:59:59.723915 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 19:59:59.723925 | orchestrator | Thursday 05 June 2025 19:59:57 +0000 (0:00:00.247) 0:02:20.754 *********
2025-06-05 19:59:59.723935 | orchestrator | ===============================================================================
2025-06-05 19:59:59.723944 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.86s
2025-06-05 19:59:59.723958 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.65s
2025-06-05 19:59:59.723968 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 33.06s
2025-06-05 19:59:59.723978 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.90s
2025-06-05 19:59:59.723987 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.56s
2025-06-05 19:59:59.723997 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.46s
2025-06-05 19:59:59.724007 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.43s
2025-06-05 19:59:59.724016 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s
2025-06-05 19:59:59.724032 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.38s
2025-06-05 19:59:59.724041 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.28s
2025-06-05 19:59:59.724051 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.24s
2025-06-05 19:59:59.724060 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.23s
2025-06-05 19:59:59.724070 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.19s
2025-06-05 19:59:59.724079 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.94s
2025-06-05 19:59:59.724089 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.72s
2025-06-05 19:59:59.724099 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.71s
2025-06-05 19:59:59.724108 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.65s
2025-06-05 19:59:59.724118 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.63s
2025-06-05 19:59:59.724127 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.61s
2025-06-05 19:59:59.724137 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.55s
2025-06-05 19:59:59.724146 | orchestrator | 2025-06-05 19:59:59 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 19:59:59.724157 | orchestrator | 2025-06-05 19:59:59 | INFO  | Wait 1 second(s) until the next check
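The `FAILED - RETRYING: ... (N retries left)` lines in the grafana handler above come from Ansible's `until`/`retries`/`delay` loop: the task is re-run until its condition holds or the retry budget is spent. A minimal sketch of those semantics (the `check` callable and the countdown message format are illustrative, not the actual kolla-ansible source):

```python
import time


def wait_until(check, retries=12, delay=5):
    """Poll `check` until it succeeds, mirroring Ansible's until/retries/delay.

    After each failed attempt (except the last) a countdown is printed, like
    Ansible's "FAILED - RETRYING: ... (N retries left)." line. `check` is any
    callable returning True once the service (here: Grafana) is reachable.
    """
    for attempt in range(retries + 1):
        if check():
            return True
        if attempt < retries:
            print(f"FAILED - RETRYING ({retries - attempt} retries left).")
            time.sleep(delay)
    return False
```

With a service that comes up on the fourth attempt, this reproduces the log's "12, 11, 10 retries left, then ok" pattern.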
2025-06-05 20:03:05.398299 | orchestrator | 2025-06-05 20:03:05 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 20:03:05.398384 | orchestrator | 2025-06-05 20:03:05 | INFO  | Task 38609167-e9fe-423e-96bc-d888b2475817 is in state STARTED
2025-06-05 20:03:05.398399 | orchestrator | 2025-06-05 20:03:05 | INFO  | Wait 1 second(s) until the next check
2025-06-05 20:03:20.650401 | orchestrator | 2025-06-05 20:03:20 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 20:03:20.652065 | orchestrator | 2025-06-05 20:03:20 | INFO  | Task 38609167-e9fe-423e-96bc-d888b2475817 is in state SUCCESS
2025-06-05 20:03:20.652209 | orchestrator | 2025-06-05 20:03:20 | INFO  | Wait 1 second(s) until the next check
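The `Task ... is in state STARTED / Wait 1 second(s) until the next check` lines are the OSISM CLI polling its task backend until every submitted task reports SUCCESS. A simplified model of that loop (`get_state` is a stand-in for whatever queries the result backend; it is an assumption, not the real OSISM API):

```python
import time


def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll task states until every task reports SUCCESS.

    Each pass prints the state of every still-pending task, then sleeps
    before the next check, matching the log output above. `get_state` maps
    a task id to a state string such as "STARTED" or "SUCCESS".
    """
    pending = list(task_ids)
    while pending:
        still_pending = []
        for task_id in pending:
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "SUCCESS":
                still_pending.append(task_id)
        pending = still_pending
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)
```

Note how, as in the log, a second task can join the pending set and finish earlier while the first is still polled.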
2025-06-05 20:03:38.925601 | orchestrator | 2025-06-05 20:03:38 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state STARTED
2025-06-05 20:03:38.925708 | orchestrator | 2025-06-05 20:03:38 | INFO  | Wait 1 second(s) until the next check
2025-06-05 20:03:41.974698 | orchestrator | 2025-06-05 20:03:41 | INFO  | Task 971d0f3f-ee1b-4895-a67b-5bfb6cb0fc64 is in state SUCCESS
2025-06-05 20:03:41.976544 | orchestrator |
2025-06-05 20:03:41.976593 | orchestrator | None
2025-06-05 20:03:41.976606 | orchestrator |
2025-06-05 20:03:41.976617 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 20:03:41.976628 | orchestrator |
2025-06-05 20:03:41.976637 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-06-05 20:03:41.976647 | orchestrator | Thursday 05 June 2025 19:55:09 +0000 (0:00:00.502) 0:00:00.502 *********
2025-06-05 20:03:41.976657 | orchestrator | changed: [testbed-manager]
2025-06-05 20:03:41.976668 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.976678 | orchestrator | changed: [testbed-node-1]
2025-06-05 20:03:41.976688 | orchestrator | changed: [testbed-node-2]
2025-06-05 20:03:41.976775 | orchestrator | changed: [testbed-node-3]
2025-06-05 20:03:41.976787 | orchestrator | changed: [testbed-node-4]
2025-06-05 20:03:41.976797 | orchestrator | changed: [testbed-node-5]
2025-06-05 20:03:41.976808 | orchestrator |
2025-06-05 20:03:41.976818 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 20:03:41.976829 | orchestrator | Thursday 05 June 2025 19:55:11 +0000 (0:00:01.790) 0:00:02.292 *********
2025-06-05 20:03:41.976839 | orchestrator | changed: [testbed-manager]
2025-06-05 20:03:41.976850 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.976861 | orchestrator | changed: [testbed-node-1]
2025-06-05 20:03:41.976870 | orchestrator | changed: [testbed-node-2]
2025-06-05 20:03:41.976880 | orchestrator | changed: [testbed-node-3]
2025-06-05 20:03:41.976890 | orchestrator | changed: [testbed-node-4]
2025-06-05 20:03:41.976900 | orchestrator | changed: [testbed-node-5]
2025-06-05 20:03:41.976911 | orchestrator |
2025-06-05 20:03:41.976921 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 20:03:41.976931 | orchestrator | Thursday 05 June 2025 19:55:12 +0000 (0:00:00.962) 0:00:03.255 *********
2025-06-05 20:03:41.976940 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-06-05 20:03:41.976949 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-06-05 20:03:41.976958 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-06-05 20:03:41.976995 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-06-05 20:03:41.977006 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-06-05 20:03:41.977114 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-06-05 20:03:41.977221 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-06-05 20:03:41.977233 | orchestrator |
2025-06-05 20:03:41.977243 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-06-05 20:03:41.977253 | orchestrator |
2025-06-05 20:03:41.977265 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-05 20:03:41.977292 | orchestrator | Thursday 05 June 2025 19:55:13 +0000 (0:00:00.958) 0:00:04.213 *********
2025-06-05 20:03:41.977305 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 20:03:41.977316 | orchestrator |
2025-06-05 20:03:41.977336 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-06-05 20:03:41.977347 | orchestrator | Thursday 05 June 2025 19:55:14 +0000 (0:00:00.920) 0:00:05.134 *********
2025-06-05 20:03:41.977359 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-06-05 20:03:41.977389 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-06-05 20:03:41.977400 | orchestrator |
2025-06-05 20:03:41.977411 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-06-05 20:03:41.977421 | orchestrator | Thursday 05 June 2025 19:55:19 +0000 (0:00:04.720) 0:00:09.855 *********
2025-06-05 20:03:41.977431 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 20:03:41.977457 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-05 20:03:41.977469 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.977479 | orchestrator |
2025-06-05 20:03:41.977489 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-05 20:03:41.977499 | orchestrator | Thursday 05 June 2025 19:55:23 +0000 (0:00:04.237) 0:00:14.092 *********
2025-06-05 20:03:41.977508 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.977518 | orchestrator |
2025-06-05 20:03:41.977527 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-06-05 20:03:41.977537 | orchestrator | Thursday 05 June 2025 19:55:24 +0000 (0:00:00.898) 0:00:14.991 *********
2025-06-05 20:03:41.977547 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.977556 | orchestrator |
2025-06-05 20:03:41.977566 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-06-05 20:03:41.977576 | orchestrator | Thursday 05 June 2025 19:55:25 +0000 (0:00:01.182) 0:00:16.174 *********
2025-06-05 20:03:41.977586 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.977596 | orchestrator |
2025-06-05 20:03:41.977606 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-05 20:03:41.977616 | orchestrator | Thursday 05 June 2025 19:55:27 +0000 (0:00:02.097) 0:00:18.271 *********
2025-06-05 20:03:41.977625 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.977635 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.977644 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.977654 | orchestrator |
2025-06-05 20:03:41.977663 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-05 20:03:41.977672 | orchestrator | Thursday 05 June 2025 19:55:27 +0000 (0:00:00.239) 0:00:18.510 *********
2025-06-05 20:03:41.977682 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.977691 | orchestrator |
2025-06-05 20:03:41.977699 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-06-05 20:03:41.977709 | orchestrator | Thursday 05 June 2025 19:55:59 +0000 (0:00:31.801) 0:00:50.311 *********
2025-06-05 20:03:41.977718 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.977727 | orchestrator |
2025-06-05 20:03:41.977737 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-05 20:03:41.977772 | orchestrator | Thursday 05 June 2025 19:56:15 +0000 (0:00:15.366) 0:01:05.678 *********
2025-06-05 20:03:41.977781 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.977803 | orchestrator |
2025-06-05 20:03:41.977813 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-05 20:03:41.977823 | orchestrator | Thursday 05 June 2025 19:56:27 +0000 (0:00:12.157) 0:01:17.835 *********
2025-06-05 20:03:41.977847 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.977857 | orchestrator |
2025-06-05 20:03:41.977866 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-06-05 20:03:41.977876 | orchestrator | Thursday 05 June 2025 19:56:28 +0000 (0:00:00.885) 0:01:18.721 *********
2025-06-05 20:03:41.977885 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.977897 | orchestrator |
2025-06-05 20:03:41.977906 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-05 20:03:41.977916 | orchestrator | Thursday 05 June 2025 19:56:28 +0000 (0:00:00.428) 0:01:19.149 *********
2025-06-05 20:03:41.977925 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 20:03:41.977935 | orchestrator |
2025-06-05 20:03:41.977945 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-05 20:03:41.977954 | orchestrator | Thursday 05 June 2025 19:56:29 +0000 (0:00:00.479) 0:01:19.628 *********
2025-06-05 20:03:41.977964 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.977973 | orchestrator |
2025-06-05 20:03:41.977982 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-05 20:03:41.977992 | orchestrator | Thursday 05 June 2025 19:56:48 +0000 (0:00:19.255) 0:01:38.883 *********
2025-06-05 20:03:41.978001 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.978011 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978082 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978094 | orchestrator |
2025-06-05 20:03:41.978103 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-06-05 20:03:41.978112 | orchestrator |
2025-06-05 20:03:41.978121 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-05 20:03:41.978131 | orchestrator | Thursday 05 June 2025 19:56:48 +0000 (0:00:00.312) 0:01:39.196 *********
2025-06-05 20:03:41.978142 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 20:03:41.978152 | orchestrator |
2025-06-05 20:03:41.978162 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-05 20:03:41.978172 | orchestrator | Thursday 05 June 2025 19:56:49 +0000 (0:00:00.553) 0:01:39.750 *********
2025-06-05 20:03:41.978181 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978191 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978201 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.978211 | orchestrator |
2025-06-05 20:03:41.978221 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-05 20:03:41.978231 | orchestrator | Thursday 05 June 2025 19:56:51 +0000 (0:00:02.520) 0:01:42.270 *********
2025-06-05 20:03:41.978240 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978250 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978259 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.978268 | orchestrator |
2025-06-05 20:03:41.978278 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-05 20:03:41.978288 | orchestrator | Thursday 05 June 2025 19:56:54 +0000 (0:00:02.544) 0:01:44.814 *********
2025-06-05 20:03:41.978298 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.978308 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978317 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978327 | orchestrator |
2025-06-05 20:03:41.978337 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-05 20:03:41.978347 | orchestrator | Thursday 05 June 2025 19:56:54 +0000 (0:00:00.320) 0:01:45.135 *********
2025-06-05 20:03:41.978356 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-05 20:03:41.978366 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978385 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-05 20:03:41.978407 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978418 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-05 20:03:41.978428 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-05 20:03:41.978437 | orchestrator |
2025-06-05 20:03:41.978448 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-05 20:03:41.978458 | orchestrator | Thursday 05 June 2025 19:57:01 +0000 (0:00:07.460) 0:01:52.595 *********
2025-06-05 20:03:41.978467 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.978476 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978485 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978494 | orchestrator |
2025-06-05 20:03:41.978504 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-05 20:03:41.978514 | orchestrator | Thursday 05 June 2025 19:57:02 +0000 (0:00:00.323) 0:01:52.919 *********
2025-06-05 20:03:41.978523 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-06-05 20:03:41.978533 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.978542 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-05 20:03:41.978552 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978561 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-05 20:03:41.978571 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978580 | orchestrator |
2025-06-05 20:03:41.978590 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-06-05 20:03:41.978600 | orchestrator | Thursday 05 June 2025 19:57:02 +0000 (0:00:00.560) 0:01:53.479 *********
2025-06-05 20:03:41.978610 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978620 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978630 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.978639 | orchestrator |
2025-06-05 20:03:41.978648 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-06-05 20:03:41.978658 | orchestrator | Thursday 05 June 2025 19:57:03 +0000 (0:00:00.505) 0:01:53.985 *********
2025-06-05 20:03:41.978668 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978677 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978688 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.978697 | orchestrator |
2025-06-05 20:03:41.978707 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-06-05 20:03:41.978717 | orchestrator | Thursday 05 June 2025 19:57:04 +0000 (0:00:01.055) 0:01:55.041 *********
2025-06-05 20:03:41.978726 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978800 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978812 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.978822 | orchestrator |
2025-06-05 20:03:41.978831 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-06-05 20:03:41.978841 | orchestrator | Thursday 05 June 2025 19:57:06 +0000 (0:00:01.973) 0:01:57.015 *********
2025-06-05 20:03:41.978850 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978859 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978869 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.978879 | orchestrator |
2025-06-05 20:03:41.978889 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-05 20:03:41.978898 | orchestrator | Thursday 05 June 2025 19:57:28 +0000 (0:00:21.682) 0:02:18.697 *********
2025-06-05 20:03:41.978908 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978917 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.978927 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.978938 | orchestrator |
2025-06-05 20:03:41.978948 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-05 20:03:41.978957 | orchestrator | Thursday 05 June 2025 19:57:39 +0000 (0:00:11.793) 0:02:30.491 *********
2025-06-05 20:03:41.978967 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:03:41.978977 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.978985 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.979004 | orchestrator |
2025-06-05 20:03:41.979013 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-06-05 20:03:41.979024 | orchestrator | Thursday 05 June 2025 19:57:40 +0000 (0:00:00.843) 0:02:31.334 *********
2025-06-05 20:03:41.979034 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.979043 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.979051 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.979060 | orchestrator |
2025-06-05 20:03:41.979068 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-06-05 20:03:41.979077 | orchestrator | Thursday 05 June 2025 19:57:52 +0000 (0:00:12.165) 0:02:43.500 *********
2025-06-05 20:03:41.979085 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.979093 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.979102 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.979110 | orchestrator |
2025-06-05 20:03:41.979118 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-05 20:03:41.979127 | orchestrator | Thursday 05 June 2025 19:57:54 +0000 (0:00:01.272) 0:02:44.773 *********
2025-06-05 20:03:41.979136 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.979144 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.979153 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.979161 | orchestrator |
2025-06-05 20:03:41.979170 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-06-05 20:03:41.979179 | orchestrator |
2025-06-05 20:03:41.979187 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-05 20:03:41.979196 | orchestrator | Thursday 05 June 2025 19:57:54 +0000 (0:00:00.286) 0:02:45.059 *********
2025-06-05 20:03:41.979205 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 20:03:41.979275 | orchestrator |
2025-06-05 20:03:41.979284 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-06-05 20:03:41.979292 | orchestrator | Thursday 05 June 2025 19:57:54 +0000 (0:00:00.542) 0:02:45.602 *********
2025-06-05 20:03:41.979316 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-06-05 20:03:41.979325 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-06-05 20:03:41.979333 | orchestrator |
2025-06-05 20:03:41.979348 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-06-05 20:03:41.979356 | orchestrator | Thursday 05 June 2025 19:57:58 +0000 (0:00:03.598) 0:02:49.201 *********
2025-06-05 20:03:41.979365 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-06-05 20:03:41.979374 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-06-05 20:03:41.979382 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-06-05 20:03:41.979391 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-06-05 20:03:41.979399 | orchestrator |
2025-06-05 20:03:41.979521 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-06-05 20:03:41.979532 | orchestrator | Thursday 05 June 2025 19:58:05 +0000 (0:00:07.032) 0:02:56.234 *********
2025-06-05 20:03:41.979541 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-05 20:03:41.979550 | orchestrator |
2025-06-05 20:03:41.979559 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-06-05 20:03:41.979587 | orchestrator | Thursday 05 June 2025 19:58:08 +0000 (0:00:03.337) 0:02:59.572 *********
2025-06-05 20:03:41.979597 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-05 20:03:41.979605 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-06-05 20:03:41.979614 | orchestrator |
2025-06-05 20:03:41.979622 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-06-05 20:03:41.979642 | orchestrator | Thursday 05 June 2025 19:58:12 +0000 (0:00:03.829) 0:03:03.401 *********
2025-06-05 20:03:41.979650 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-05 20:03:41.979659 | orchestrator |
2025-06-05 20:03:41.979668 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-06-05 20:03:41.979677 | orchestrator | Thursday 05 June 2025 19:58:16 +0000 (0:00:03.419) 0:03:06.820 *********
2025-06-05 20:03:41.979686 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-06-05 20:03:41.979696 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-06-05 20:03:41.979705 | orchestrator |
2025-06-05 20:03:41.979715 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-06-05 20:03:41.979734 | orchestrator | Thursday 05 June 2025 19:58:23 +0000 (0:00:07.728) 0:03:14.549 *********
2025-06-05 20:03:41.979795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.979816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.979827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.979856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.979867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.979876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.979885 | orchestrator |
2025-06-05 20:03:41.979894 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-06-05 20:03:41.979903 | orchestrator | Thursday 05 June 2025 19:58:25 +0000 (0:00:01.333) 0:03:15.882 *********
2025-06-05 20:03:41.979911 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.979920 | orchestrator |
2025-06-05 20:03:41.979928 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-06-05 20:03:41.979937 | orchestrator | Thursday 05 June 2025 19:58:25 +0000 (0:00:00.140) 0:03:16.023 *********
2025-06-05 20:03:41.979945 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.979954 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.979962 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.979971 | orchestrator |
2025-06-05 20:03:41.979980 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-06-05 20:03:41.979988 | orchestrator | Thursday 05 June 2025 19:58:25 +0000 (0:00:00.549) 0:03:16.572 *********
2025-06-05 20:03:41.979996 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-05 20:03:41.980004 | orchestrator |
2025-06-05 20:03:41.980012 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-06-05 20:03:41.980020 | orchestrator | Thursday 05 June 2025 19:58:26 +0000 (0:00:00.727) 0:03:17.300 *********
2025-06-05 20:03:41.980028 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.980041 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.980050 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.980059 | orchestrator |
2025-06-05 20:03:41.980075 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-05 20:03:41.980084 | orchestrator | Thursday 05 June 2025 19:58:26 +0000 (0:00:00.292) 0:03:17.593 *********
2025-06-05 20:03:41.980092 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 20:03:41.980101 | orchestrator |
2025-06-05 20:03:41.980110 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-06-05 20:03:41.980119 | orchestrator | Thursday 05 June 2025 19:58:27 +0000 (0:00:00.735) 0:03:18.328 *********
2025-06-05 20:03:41.980136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980212 | orchestrator |
2025-06-05 20:03:41.980220 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-06-05 20:03:41.980228 | orchestrator | Thursday 05 June 2025 19:58:30 +0000 (0:00:02.490) 0:03:20.818 *********
2025-06-05 20:03:41.980238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980269 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.980282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980305 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.980314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980338 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.980348 | orchestrator |
2025-06-05 20:03:41.980356 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-05 20:03:41.980365 | orchestrator | Thursday 05 June 2025 19:58:30 +0000 (0:00:00.573) 0:03:21.391 *********
2025-06-05 20:03:41.980378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980449 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.980466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980497 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.980510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-06-05 20:03:41.980520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.980528 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.980537 | orchestrator |
2025-06-05 20:03:41.980546 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-06-05 20:03:41.980554 | orchestrator | Thursday 05 June 2025 19:58:31 +0000 (0:00:00.931) 0:03:22.323 *********
2025-06-05 20:03:41.980569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.980579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.980599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.980614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.980623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.980632 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.980648 | orchestrator | 2025-06-05 20:03:41.980656 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-05 20:03:41.980664 | orchestrator | Thursday 05 June 2025 19:58:34 +0000 (0:00:02.342) 0:03:24.665 ********* 2025-06-05 20:03:41.980677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.980687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.980702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.980717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.980726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.980763 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.980773 | orchestrator | 2025-06-05 20:03:41.980782 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-05 20:03:41.980791 | orchestrator | Thursday 05 June 2025 19:58:39 +0000 (0:00:05.455) 0:03:30.120 ********* 2025-06-05 20:03:41.980806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-05 20:03:41.980816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.980825 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.980840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-05 20:03:41.980855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.980864 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.980873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-05 20:03:41.980889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.980898 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.980907 | orchestrator | 2025-06-05 20:03:41.980916 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-05 20:03:41.980929 | orchestrator | Thursday 05 June 2025 19:58:40 +0000 (0:00:00.545) 0:03:30.666 ********* 2025-06-05 20:03:41.980937 | orchestrator | changed: [testbed-node-0] 2025-06-05 20:03:41.980944 | orchestrator | changed: [testbed-node-2] 2025-06-05 20:03:41.980951 | orchestrator | changed: [testbed-node-1] 2025-06-05 20:03:41.980959 | orchestrator | 2025-06-05 20:03:41.980966 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-05 20:03:41.980974 | orchestrator | Thursday 05 June 2025 19:58:42 +0000 (0:00:02.074) 0:03:32.740 ********* 2025-06-05 20:03:41.980983 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.980990 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.980998 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.981006 | orchestrator | 2025-06-05 20:03:41.981013 | orchestrator | TASK [nova : Check nova 
containers] ******************************************** 2025-06-05 20:03:41.981021 | orchestrator | Thursday 05 June 2025 19:58:42 +0000 (0:00:00.274) 0:03:33.015 ********* 2025-06-05 20:03:41.981030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.981045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.981062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}}}}) 2025-06-05 20:03:41.981078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.981087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.981098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.981107 | orchestrator | 2025-06-05 20:03:41.981116 | orchestrator 
| TASK [nova : Flush handlers] *************************************************** 2025-06-05 20:03:41.981123 | orchestrator | Thursday 05 June 2025 19:58:44 +0000 (0:00:01.901) 0:03:34.917 ********* 2025-06-05 20:03:41.981131 | orchestrator | 2025-06-05 20:03:41.981140 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-05 20:03:41.981148 | orchestrator | Thursday 05 June 2025 19:58:44 +0000 (0:00:00.118) 0:03:35.036 ********* 2025-06-05 20:03:41.981157 | orchestrator | 2025-06-05 20:03:41.981166 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-05 20:03:41.981174 | orchestrator | Thursday 05 June 2025 19:58:44 +0000 (0:00:00.114) 0:03:35.150 ********* 2025-06-05 20:03:41.981182 | orchestrator | 2025-06-05 20:03:41.981191 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-05 20:03:41.981200 | orchestrator | Thursday 05 June 2025 19:58:44 +0000 (0:00:00.193) 0:03:35.344 ********* 2025-06-05 20:03:41.981208 | orchestrator | changed: [testbed-node-0] 2025-06-05 20:03:41.981216 | orchestrator | changed: [testbed-node-1] 2025-06-05 20:03:41.981224 | orchestrator | changed: [testbed-node-2] 2025-06-05 20:03:41.981232 | orchestrator | 2025-06-05 20:03:41.981241 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-05 20:03:41.981250 | orchestrator | Thursday 05 June 2025 19:59:09 +0000 (0:00:25.017) 0:04:00.361 ********* 2025-06-05 20:03:41.981264 | orchestrator | changed: [testbed-node-0] 2025-06-05 20:03:41.981273 | orchestrator | changed: [testbed-node-2] 2025-06-05 20:03:41.981281 | orchestrator | changed: [testbed-node-1] 2025-06-05 20:03:41.981290 | orchestrator | 2025-06-05 20:03:41.981299 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-05 20:03:41.981307 | orchestrator | 2025-06-05 
20:03:41.981316 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-05 20:03:41.981324 | orchestrator | Thursday 05 June 2025 19:59:15 +0000 (0:00:05.507) 0:04:05.868 ********* 2025-06-05 20:03:41.981334 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 20:03:41.981344 | orchestrator | 2025-06-05 20:03:41.981357 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-05 20:03:41.981366 | orchestrator | Thursday 05 June 2025 19:59:16 +0000 (0:00:01.168) 0:04:07.036 ********* 2025-06-05 20:03:41.981375 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.981383 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.981392 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.981401 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.981409 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.981418 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.981427 | orchestrator | 2025-06-05 20:03:41.981435 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-05 20:03:41.981444 | orchestrator | Thursday 05 June 2025 19:59:17 +0000 (0:00:00.713) 0:04:07.750 ********* 2025-06-05 20:03:41.981452 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.981460 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.981469 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.981477 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-05 20:03:41.981486 | orchestrator | 2025-06-05 20:03:41.981494 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-05 20:03:41.981503 | orchestrator | Thursday 05 June 2025 19:59:18 
+0000 (0:00:01.030) 0:04:08.781 ********* 2025-06-05 20:03:41.981513 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-05 20:03:41.981521 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-05 20:03:41.981530 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-05 20:03:41.981538 | orchestrator | 2025-06-05 20:03:41.981547 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-05 20:03:41.981556 | orchestrator | Thursday 05 June 2025 19:59:18 +0000 (0:00:00.664) 0:04:09.445 ********* 2025-06-05 20:03:41.981564 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-05 20:03:41.981573 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-05 20:03:41.981581 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-05 20:03:41.981590 | orchestrator | 2025-06-05 20:03:41.981598 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-05 20:03:41.981607 | orchestrator | Thursday 05 June 2025 19:59:19 +0000 (0:00:01.113) 0:04:10.559 ********* 2025-06-05 20:03:41.981615 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-05 20:03:41.981625 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.981633 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-05 20:03:41.981642 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.981650 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-05 20:03:41.981659 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.981668 | orchestrator | 2025-06-05 20:03:41.981676 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-05 20:03:41.981684 | orchestrator | Thursday 05 June 2025 19:59:20 +0000 (0:00:00.695) 0:04:11.255 ********* 2025-06-05 20:03:41.981693 | orchestrator | skipping: 
[testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 20:03:41.981708 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 20:03:41.981716 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.981723 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 20:03:41.981731 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 20:03:41.981790 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.981805 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-05 20:03:41.981813 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-05 20:03:41.981822 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-05 20:03:41.981830 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-05 20:03:41.981838 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.981846 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-05 20:03:41.981854 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-05 20:03:41.981863 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-05 20:03:41.981870 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-05 20:03:41.981879 | orchestrator | 2025-06-05 20:03:41.981887 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-05 20:03:41.981895 | orchestrator | Thursday 05 June 2025 19:59:21 +0000 (0:00:01.074) 0:04:12.330 ********* 2025-06-05 20:03:41.981904 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.981912 | orchestrator | skipping: [testbed-node-1] 2025-06-05 
20:03:41.981921 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.981930 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.981939 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.981947 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.981955 | orchestrator | 2025-06-05 20:03:41.981963 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-05 20:03:41.981972 | orchestrator | Thursday 05 June 2025 19:59:23 +0000 (0:00:01.311) 0:04:13.642 ********* 2025-06-05 20:03:41.981980 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.981988 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.981997 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.982005 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.982013 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.982053 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.982062 | orchestrator | 2025-06-05 20:03:41.982071 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-05 20:03:41.982080 | orchestrator | Thursday 05 June 2025 19:59:25 +0000 (0:00:01.966) 0:04:15.609 ********* 2025-06-05 20:03:41.982099 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982110 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982151 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982166 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982241 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982284 | orchestrator | 2025-06-05 20:03:41.982293 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-05 20:03:41.982307 | orchestrator | Thursday 05 June 2025 19:59:27 +0000 (0:00:02.495) 0:04:18.104 ********* 2025-06-05 20:03:41.982316 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-05 20:03:41.982325 | orchestrator | 2025-06-05 20:03:41.982334 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-05 20:03:41.982344 | orchestrator | Thursday 05 June 2025 19:59:28 +0000 (0:00:01.190) 0:04:19.294 ********* 2025-06-05 20:03:41.982354 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982877 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.982992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983004 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983080 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983138 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983153 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.983162 | orchestrator | 2025-06-05 20:03:41.983171 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-05 20:03:41.983179 | orchestrator | Thursday 05 June 2025 19:59:32 +0000 (0:00:03.526) 0:04:22.821 ********* 2025-06-05 20:03:41.983188 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.983200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 
20:03:41.983209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.983218 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.983250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.983268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.983277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983286 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.983299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.983308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.983318 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983332 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.983364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.983375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983383 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.983391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.983404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983412 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.983420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.983428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983442 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.983450 | orchestrator |
2025-06-05 20:03:41.983458 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-06-05 20:03:41.983466 | orchestrator | Thursday 05 June 2025 19:59:33 +0000 (0:00:01.715) 0:04:24.536 *********
2025-06-05 20:03:41.983499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.983509 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.983518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983527 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.983540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.983548 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.983586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983595 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.983603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.983611 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.983623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983631 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:03:41.983639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.983652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983660 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.983690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.983699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983707 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.983715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.983723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.983731 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.983794 | orchestrator |
2025-06-05 20:03:41.983809 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-05 20:03:41.983818 | orchestrator |
Thursday 05 June 2025 19:59:35 +0000 (0:00:01.895) 0:04:26.432 *********
2025-06-05 20:03:41.983827 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.983835 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.983849 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.983857 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-05 20:03:41.983864 | orchestrator |
2025-06-05 20:03:41.983872 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-06-05 20:03:41.983880 | orchestrator | Thursday 05 June 2025 19:59:36 +0000 (0:00:00.865) 0:04:27.298 *********
2025-06-05 20:03:41.983887 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-05 20:03:41.983895 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-05 20:03:41.983903 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-05 20:03:41.983911 | orchestrator |
2025-06-05 20:03:41.983918 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-06-05 20:03:41.983925 | orchestrator | Thursday 05 June 2025 19:59:37 +0000 (0:00:01.029) 0:04:28.327 *********
2025-06-05 20:03:41.983933 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-05 20:03:41.983941 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-05 20:03:41.983949 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-05 20:03:41.983957 | orchestrator |
2025-06-05 20:03:41.983965 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-06-05 20:03:41.983973 | orchestrator | Thursday 05 June 2025 19:59:38 +0000 (0:00:00.911) 0:04:29.239 *********
2025-06-05 20:03:41.983981 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:03:41.983989 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:03:41.983997 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:03:41.984005 | orchestrator |
2025-06-05 20:03:41.984013 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-06-05 20:03:41.984021 | orchestrator | Thursday 05 June 2025 19:59:39 +0000 (0:00:00.490) 0:04:29.730 *********
2025-06-05 20:03:41.984029 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:03:41.984037 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:03:41.984045 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:03:41.984053 | orchestrator |
2025-06-05 20:03:41.984060 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-06-05 20:03:41.984067 | orchestrator | Thursday 05 June 2025 19:59:39 +0000 (0:00:00.505) 0:04:30.236 *********
2025-06-05 20:03:41.984075 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-05 20:03:41.984117 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-05 20:03:41.984127 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-05 20:03:41.984134 | orchestrator |
2025-06-05 20:03:41.984142 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-06-05 20:03:41.984150 | orchestrator | Thursday 05 June 2025 19:59:40 +0000 (0:00:01.335) 0:04:31.572 *********
2025-06-05 20:03:41.984158 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-05 20:03:41.984166 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-05 20:03:41.984174 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-05 20:03:41.984182 | orchestrator |
2025-06-05 20:03:41.984190 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-06-05 20:03:41.984199 | orchestrator | Thursday 05 June 2025 19:59:42 +0000 (0:00:01.151) 0:04:32.723 *********
2025-06-05 20:03:41.984206 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-06-05 20:03:41.984214 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-06-05 20:03:41.984222 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-06-05 20:03:41.984231 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-06-05 20:03:41.984239 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-06-05 20:03:41.984247 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-06-05 20:03:41.984254 | orchestrator |
2025-06-05 20:03:41.984262 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-06-05 20:03:41.984269 | orchestrator | Thursday 05 June 2025 19:59:45 +0000 (0:00:03.448) 0:04:36.171 *********
2025-06-05 20:03:41.984286 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.984294 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:03:41.984301 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.984309 | orchestrator |
2025-06-05 20:03:41.984317 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-06-05 20:03:41.984324 | orchestrator | Thursday 05 June 2025 19:59:45 +0000 (0:00:00.296) 0:04:36.467 *********
2025-06-05 20:03:41.984332 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.984340 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:03:41.984348 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.984356 | orchestrator |
2025-06-05 20:03:41.984364 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-06-05 20:03:41.984372 | orchestrator | Thursday 05 June 2025 19:59:46 +0000 (0:00:00.285) 0:04:36.753 *********
2025-06-05 20:03:41.984380 | orchestrator | changed: [testbed-node-3]
2025-06-05 20:03:41.984388 | orchestrator | changed: [testbed-node-4]
2025-06-05 20:03:41.984396 | orchestrator | changed: [testbed-node-5]
2025-06-05 20:03:41.984404 | orchestrator |
2025-06-05 20:03:41.984412 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-06-05 20:03:41.984420 | orchestrator | Thursday 05 June 2025 19:59:47 +0000 (0:00:01.505) 0:04:38.259 *********
2025-06-05 20:03:41.984429 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-05 20:03:41.984439 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-05 20:03:41.984448 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-06-05 20:03:41.984466 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-05 20:03:41.984474 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-05 20:03:41.984481 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-06-05 20:03:41.984489 | orchestrator |
2025-06-05 20:03:41.984498 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-06-05 20:03:41.984507 | orchestrator | Thursday 05 June 2025 19:59:50 +0000 (0:00:03.188) 0:04:41.447 *********
2025-06-05 20:03:41.984517 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-05 20:03:41.984524 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-05 20:03:41.984532 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-05 20:03:41.984540 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-05 20:03:41.984548 | orchestrator | changed: [testbed-node-5]
2025-06-05 20:03:41.984556 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-05 20:03:41.984564 | orchestrator | changed: [testbed-node-3]
2025-06-05 20:03:41.984571 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-05 20:03:41.984578 | orchestrator | changed: [testbed-node-4]
2025-06-05 20:03:41.984586 | orchestrator |
2025-06-05 20:03:41.984593 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-06-05 20:03:41.984601 | orchestrator | Thursday 05 June 2025 19:59:54 +0000 (0:00:03.316) 0:04:44.764 *********
2025-06-05 20:03:41.984608 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.984616 | orchestrator |
2025-06-05 20:03:41.984624 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-06-05 20:03:41.984632 | orchestrator | Thursday 05 June 2025 19:59:54 +0000 (0:00:00.144) 0:04:44.908 *********
2025-06-05 20:03:41.984640 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.984647 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:03:41.984655 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.984669 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.984677 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.984685 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.984693 | orchestrator |
2025-06-05 20:03:41.984700 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-06-05 20:03:41.984769 | orchestrator | Thursday 05 June 2025 19:59:55 +0000 (0:00:00.767) 0:04:45.675 *********
2025-06-05 20:03:41.984781 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-05 20:03:41.984789 | orchestrator |
2025-06-05 20:03:41.984798 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-06-05 20:03:41.984806 | orchestrator | Thursday 05 June 2025 19:59:55 +0000
(0:00:00.704) 0:04:46.380 *********
2025-06-05 20:03:41.984814 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.984822 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:03:41.984830 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.984838 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.984846 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.984854 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.984861 | orchestrator |
2025-06-05 20:03:41.984870 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-06-05 20:03:41.984879 | orchestrator | Thursday 05 June 2025 19:59:56 +0000 (0:00:00.549) 0:04:46.929 *********
2025-06-05 20:03:41.984889 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.984905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.984913 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-05 20:03:41.984930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.984945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.984953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-05 20:03:41.984961 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.984969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.984981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-05 20:03:41.984991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.985006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.985021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.985031 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.985040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.985052 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-05 20:03:41.985066 | orchestrator |
2025-06-05 20:03:41.985074 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-06-05 20:03:41.985082 | orchestrator |
Thursday 05 June 2025 20:00:00 +0000 (0:00:03.847) 0:04:50.777 ********* 2025-06-05 20:03:41.985090 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.985102 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 20:03:41.985111 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.985119 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 20:03:41.985130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.985144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 20:03:41.985157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985166 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-06-05 20:03:41.985194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985230 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.985246 | orchestrator | 2025-06-05 20:03:41.985255 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-05 20:03:41.985263 | orchestrator | Thursday 05 June 2025 20:00:06 +0000 (0:00:06.221) 0:04:56.998 ********* 2025-06-05 20:03:41.985271 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.985280 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.985288 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.985395 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.985407 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.985413 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.985420 | orchestrator | 2025-06-05 20:03:41.985426 | orchestrator 
| TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-05 20:03:41.985433 | orchestrator | Thursday 05 June 2025 20:00:07 +0000 (0:00:01.438) 0:04:58.436 ********* 2025-06-05 20:03:41.985440 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-05 20:03:41.985456 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-05 20:03:41.985464 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-05 20:03:41.985472 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-05 20:03:41.985485 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-05 20:03:41.985494 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.985501 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-05 20:03:41.985510 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-05 20:03:41.985517 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-05 20:03:41.985525 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.985533 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-05 20:03:41.985542 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.985550 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-05 20:03:41.985559 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-05 20:03:41.985567 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-05 20:03:41.985576 
| orchestrator | 2025-06-05 20:03:41.985585 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-05 20:03:41.985594 | orchestrator | Thursday 05 June 2025 20:00:11 +0000 (0:00:03.503) 0:05:01.940 ********* 2025-06-05 20:03:41.985602 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.985610 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.985619 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.985628 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.985637 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.985645 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.985654 | orchestrator | 2025-06-05 20:03:41.985663 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-05 20:03:41.985671 | orchestrator | Thursday 05 June 2025 20:00:12 +0000 (0:00:00.779) 0:05:02.720 ********* 2025-06-05 20:03:41.985680 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-05 20:03:41.985689 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-05 20:03:41.985704 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-05 20:03:41.985714 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-05 20:03:41.985722 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-05 20:03:41.985729 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-05 20:03:41.985736 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 
'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-05 20:03:41.985764 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-05 20:03:41.985771 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-05 20:03:41.985780 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-05 20:03:41.985787 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.985800 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-05 20:03:41.985809 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.985816 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-05 20:03:41.985823 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.985830 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-05 20:03:41.985837 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-05 20:03:41.985844 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-05 20:03:41.985851 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-05 20:03:41.985858 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-05 20:03:41.985866 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-05 20:03:41.985875 | orchestrator | 2025-06-05 
20:03:41.985883 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-05 20:03:41.985890 | orchestrator | Thursday 05 June 2025 20:00:16 +0000 (0:00:04.671) 0:05:07.391 ********* 2025-06-05 20:03:41.985898 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-05 20:03:41.985906 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-05 20:03:41.985923 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-05 20:03:41.985931 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-05 20:03:41.985938 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-05 20:03:41.985946 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-05 20:03:41.985953 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-05 20:03:41.985961 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-05 20:03:41.985968 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-05 20:03:41.985976 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-05 20:03:41.985984 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-05 20:03:41.985991 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-05 20:03:41.985998 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-05 20:03:41.986006 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.986134 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-05 20:03:41.986145 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-05 20:03:41.986154 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.986163 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-05 20:03:41.986171 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.986179 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-05 20:03:41.986187 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-05 20:03:41.986195 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-05 20:03:41.986212 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-05 20:03:41.986229 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-05 20:03:41.986237 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-05 20:03:41.986245 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-05 20:03:41.986253 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-05 20:03:41.986261 | orchestrator | 2025-06-05 20:03:41.986269 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-05 20:03:41.986278 | orchestrator | Thursday 05 June 2025 20:00:23 +0000 (0:00:06.850) 0:05:14.242 ********* 2025-06-05 20:03:41.986286 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.986294 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.986302 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.986310 | orchestrator | 
skipping: [testbed-node-0] 2025-06-05 20:03:41.986318 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.986326 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.986333 | orchestrator | 2025-06-05 20:03:41.986341 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-05 20:03:41.986348 | orchestrator | Thursday 05 June 2025 20:00:24 +0000 (0:00:00.540) 0:05:14.783 ********* 2025-06-05 20:03:41.986356 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.986363 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.986371 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.986378 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.986385 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.986392 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.986399 | orchestrator | 2025-06-05 20:03:41.986406 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-05 20:03:41.986414 | orchestrator | Thursday 05 June 2025 20:00:25 +0000 (0:00:00.833) 0:05:15.617 ********* 2025-06-05 20:03:41.986422 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.986430 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.986437 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.986445 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.986453 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.986462 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.986470 | orchestrator | 2025-06-05 20:03:41.986476 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-05 20:03:41.986483 | orchestrator | Thursday 05 June 2025 20:00:26 +0000 (0:00:01.717) 0:05:17.334 ********* 2025-06-05 20:03:41.986499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.986508 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 20:03:41.986522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.986536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.986544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 20:03:41.986551 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.986559 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.986567 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.986596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-05 20:03:41.986609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-05 20:03:41.986630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.986637 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.986645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-05 20:03:41.986653 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.986660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-05 20:03:41.986667 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.986677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.986689 | orchestrator | 
skipping: [testbed-node-1] 2025-06-05 20:03:41.986697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-05 20:03:41.986709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-05 20:03:41.986717 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.986725 | orchestrator | 2025-06-05 20:03:41.986732 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-05 20:03:41.986784 | orchestrator | Thursday 05 June 2025 20:00:28 +0000 (0:00:01.605) 0:05:18.940 ********* 2025-06-05 20:03:41.986794 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-05 20:03:41.986802 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-05 20:03:41.986810 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.986818 | orchestrator | skipping: 
[testbed-node-4] => (item=nova-compute)  2025-06-05 20:03:41.986825 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-05 20:03:41.986833 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.986840 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-05 20:03:41.986848 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-05 20:03:41.986856 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.986863 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-05 20:03:41.986871 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-05 20:03:41.986878 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.986886 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-05 20:03:41.986894 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-05 20:03:41.986902 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.986909 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-05 20:03:41.986916 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-05 20:03:41.986923 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.986931 | orchestrator | 2025-06-05 20:03:41.986938 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-05 20:03:41.987001 | orchestrator | Thursday 05 June 2025 20:00:28 +0000 (0:00:00.610) 0:05:19.551 ********* 2025-06-05 20:03:41.987011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987035 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987173 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-05 20:03:41.987181 | orchestrator | 2025-06-05 20:03:41.987189 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-05 20:03:41.987197 | orchestrator | Thursday 05 June 2025 20:00:31 +0000 (0:00:02.927) 0:05:22.478 ********* 2025-06-05 20:03:41.987204 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.987212 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.987219 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.987231 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.987238 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.987246 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.987254 | orchestrator | 2025-06-05 20:03:41.987261 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-05 20:03:41.987268 | orchestrator | Thursday 05 June 2025 20:00:32 +0000 (0:00:00.480) 0:05:22.959 ********* 2025-06-05 20:03:41.987276 | orchestrator | 2025-06-05 20:03:41.987283 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-05 20:03:41.987291 | orchestrator | Thursday 05 June 2025 20:00:32 +0000 (0:00:00.235) 0:05:23.194 ********* 2025-06-05 20:03:41.987298 | orchestrator 
| 2025-06-05 20:03:41.987305 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-05 20:03:41.987313 | orchestrator | Thursday 05 June 2025 20:00:32 +0000 (0:00:00.116) 0:05:23.311 ********* 2025-06-05 20:03:41.987321 | orchestrator | 2025-06-05 20:03:41.987329 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-05 20:03:41.987342 | orchestrator | Thursday 05 June 2025 20:00:32 +0000 (0:00:00.136) 0:05:23.447 ********* 2025-06-05 20:03:41.987350 | orchestrator | 2025-06-05 20:03:41.987357 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-05 20:03:41.987365 | orchestrator | Thursday 05 June 2025 20:00:32 +0000 (0:00:00.135) 0:05:23.583 ********* 2025-06-05 20:03:41.987373 | orchestrator | 2025-06-05 20:03:41.987381 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-05 20:03:41.987388 | orchestrator | Thursday 05 June 2025 20:00:33 +0000 (0:00:00.113) 0:05:23.696 ********* 2025-06-05 20:03:41.987396 | orchestrator | 2025-06-05 20:03:41.987403 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-05 20:03:41.987410 | orchestrator | Thursday 05 June 2025 20:00:33 +0000 (0:00:00.116) 0:05:23.813 ********* 2025-06-05 20:03:41.987417 | orchestrator | changed: [testbed-node-1] 2025-06-05 20:03:41.987425 | orchestrator | changed: [testbed-node-0] 2025-06-05 20:03:41.987432 | orchestrator | changed: [testbed-node-2] 2025-06-05 20:03:41.987440 | orchestrator | 2025-06-05 20:03:41.987447 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-05 20:03:41.987455 | orchestrator | Thursday 05 June 2025 20:00:44 +0000 (0:00:11.607) 0:05:35.421 ********* 2025-06-05 20:03:41.987463 | orchestrator | changed: [testbed-node-0] 2025-06-05 20:03:41.987471 | orchestrator | 
changed: [testbed-node-2] 2025-06-05 20:03:41.987478 | orchestrator | changed: [testbed-node-1] 2025-06-05 20:03:41.987486 | orchestrator | 2025-06-05 20:03:41.987493 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-05 20:03:41.987501 | orchestrator | Thursday 05 June 2025 20:01:01 +0000 (0:00:16.712) 0:05:52.133 ********* 2025-06-05 20:03:41.987508 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.987517 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.987524 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.987532 | orchestrator | 2025-06-05 20:03:41.987540 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-05 20:03:41.987547 | orchestrator | Thursday 05 June 2025 20:01:23 +0000 (0:00:22.359) 0:06:14.492 ********* 2025-06-05 20:03:41.987554 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.987562 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.987570 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.987577 | orchestrator | 2025-06-05 20:03:41.987585 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-05 20:03:41.987593 | orchestrator | Thursday 05 June 2025 20:02:06 +0000 (0:00:42.279) 0:06:56.772 ********* 2025-06-05 20:03:41.987600 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.987615 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.987624 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.987631 | orchestrator | 2025-06-05 20:03:41.987638 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-05 20:03:41.987646 | orchestrator | Thursday 05 June 2025 20:02:07 +0000 (0:00:01.035) 0:06:57.807 ********* 2025-06-05 20:03:41.987653 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.987661 | orchestrator | changed: 
[testbed-node-4] 2025-06-05 20:03:41.987699 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.987709 | orchestrator | 2025-06-05 20:03:41.987716 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-05 20:03:41.987724 | orchestrator | Thursday 05 June 2025 20:02:08 +0000 (0:00:00.888) 0:06:58.695 ********* 2025-06-05 20:03:41.987731 | orchestrator | changed: [testbed-node-3] 2025-06-05 20:03:41.987757 | orchestrator | changed: [testbed-node-5] 2025-06-05 20:03:41.987767 | orchestrator | changed: [testbed-node-4] 2025-06-05 20:03:41.987774 | orchestrator | 2025-06-05 20:03:41.987781 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-05 20:03:41.987788 | orchestrator | Thursday 05 June 2025 20:02:32 +0000 (0:00:24.005) 0:07:22.701 ********* 2025-06-05 20:03:41.987795 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.987810 | orchestrator | 2025-06-05 20:03:41.987817 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-05 20:03:41.987824 | orchestrator | Thursday 05 June 2025 20:02:32 +0000 (0:00:00.133) 0:07:22.834 ********* 2025-06-05 20:03:41.987831 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.987838 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.987845 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.987853 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.987860 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.987868 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-06-05 20:03:41.987876 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-05 20:03:41.987883 | orchestrator | 2025-06-05 20:03:41.987890 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-05 20:03:41.987898 | orchestrator | Thursday 05 June 2025 20:02:55 +0000 (0:00:22.846) 0:07:45.680 ********* 2025-06-05 20:03:41.987905 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.987912 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.987919 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.987926 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.987942 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.987950 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.987958 | orchestrator | 2025-06-05 20:03:41.987965 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-05 20:03:41.987972 | orchestrator | Thursday 05 June 2025 20:03:02 +0000 (0:00:07.771) 0:07:53.452 ********* 2025-06-05 20:03:41.987980 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:03:41.987987 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:03:41.987995 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:03:41.988003 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:03:41.988010 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:03:41.988018 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-05 20:03:41.988025 | orchestrator | 2025-06-05 20:03:41.988033 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-05 20:03:41.988040 | orchestrator | Thursday 05 June 2025 20:03:07 +0000 (0:00:04.517) 0:07:57.969 ********* 2025-06-05 20:03:41.988047 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-05 20:03:41.988054 | 
orchestrator | 2025-06-05 20:03:41.988062 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-05 20:03:41.988069 | orchestrator | Thursday 05 June 2025 20:03:20 +0000 (0:00:12.896) 0:08:10.866 ********* 2025-06-05 20:03:41.988076 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-05 20:03:41.988084 | orchestrator | 2025-06-05 20:03:41.988091 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-05 20:03:41.988098 | orchestrator | Thursday 05 June 2025 20:03:21 +0000 (0:00:01.218) 0:08:12.084 ********* 2025-06-05 20:03:41.988105 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:03:41.988112 | orchestrator | 2025-06-05 20:03:41.988120 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-05 20:03:41.988127 | orchestrator | Thursday 05 June 2025 20:03:22 +0000 (0:00:01.162) 0:08:13.247 ********* 2025-06-05 20:03:41.988134 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-05 20:03:41.988141 | orchestrator | 2025-06-05 20:03:41.988148 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-05 20:03:41.988155 | orchestrator | Thursday 05 June 2025 20:03:34 +0000 (0:00:12.146) 0:08:25.393 ********* 2025-06-05 20:03:41.988162 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:03:41.988170 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:03:41.988177 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:03:41.988185 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:03:41.988193 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:03:41.988207 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:03:41.988215 | orchestrator | 2025-06-05 20:03:41.988223 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-05 20:03:41.988230 | orchestrator | 2025-06-05 
20:03:41.988237 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-06-05 20:03:41.988245 | orchestrator | Thursday 05 June 2025 20:03:36 +0000 (0:00:01.754) 0:08:27.147 *********
2025-06-05 20:03:41.988252 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:03:41.988260 | orchestrator | changed: [testbed-node-1]
2025-06-05 20:03:41.988268 | orchestrator | changed: [testbed-node-2]
2025-06-05 20:03:41.988275 | orchestrator |
2025-06-05 20:03:41.988282 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-06-05 20:03:41.988290 | orchestrator |
2025-06-05 20:03:41.988298 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-06-05 20:03:41.988305 | orchestrator | Thursday 05 June 2025 20:03:37 +0000 (0:00:01.306) 0:08:28.454 *********
2025-06-05 20:03:41.988313 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.988327 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.988334 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.988341 | orchestrator |
2025-06-05 20:03:41.988349 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-06-05 20:03:41.988358 | orchestrator |
2025-06-05 20:03:41.988367 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-06-05 20:03:41.988374 | orchestrator | Thursday 05 June 2025 20:03:38 +0000 (0:00:00.559) 0:08:29.013 *********
2025-06-05 20:03:41.988381 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-06-05 20:03:41.988389 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-05 20:03:41.988396 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-05 20:03:41.988403 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-06-05 20:03:41.988410 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-06-05 20:03:41.988417 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-06-05 20:03:41.988425 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:03:41.988432 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-06-05 20:03:41.988439 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-05 20:03:41.988447 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-05 20:03:41.988454 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-06-05 20:03:41.988461 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-06-05 20:03:41.988468 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-06-05 20:03:41.988475 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:03:41.988483 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-06-05 20:03:41.988490 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-05 20:03:41.988497 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-05 20:03:41.988505 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-06-05 20:03:41.988513 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-06-05 20:03:41.988520 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-06-05 20:03:41.988528 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:03:41.988535 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-06-05 20:03:41.988549 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-05 20:03:41.988557 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-05 20:03:41.988564 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-06-05 20:03:41.988572 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-06-05 20:03:41.988579 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-06-05 20:03:41.988593 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.988601 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-06-05 20:03:41.988609 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-05 20:03:41.988616 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-05 20:03:41.988624 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-06-05 20:03:41.988631 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-06-05 20:03:41.988637 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-06-05 20:03:41.988644 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.988651 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-06-05 20:03:41.988659 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-05 20:03:41.988666 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-05 20:03:41.988673 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-06-05 20:03:41.988680 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-06-05 20:03:41.988687 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-06-05 20:03:41.988695 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.988703 | orchestrator |
2025-06-05 20:03:41.988710 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-06-05 20:03:41.988718 | orchestrator |
2025-06-05 20:03:41.988725 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-06-05 20:03:41.988733 | orchestrator | Thursday 05 June 2025 20:03:39 +0000 (0:00:01.285) 0:08:30.299 *********
2025-06-05 20:03:41.988790 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-05 20:03:41.988800 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-05 20:03:41.988807 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.988814 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-05 20:03:41.988821 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-05 20:03:41.988829 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.988839 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-05 20:03:41.988849 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-05 20:03:41.988857 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.988865 | orchestrator |
2025-06-05 20:03:41.988871 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-06-05 20:03:41.988878 | orchestrator |
2025-06-05 20:03:41.988884 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-06-05 20:03:41.988891 | orchestrator | Thursday 05 June 2025 20:03:40 +0000 (0:00:00.701) 0:08:31.000 *********
2025-06-05 20:03:41.988898 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.988904 | orchestrator |
2025-06-05 20:03:41.988912 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-06-05 20:03:41.988918 | orchestrator |
2025-06-05 20:03:41.988925 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-06-05 20:03:41.988938 | orchestrator | Thursday 05 June 2025 20:03:41 +0000 (0:00:00.647) 0:08:31.647 *********
2025-06-05 20:03:41.988944 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:03:41.988950 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:03:41.988955 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:03:41.988961 | orchestrator |
2025-06-05 20:03:41.988968 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:03:41.988974 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 20:03:41.988983 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-06-05 20:03:41.988990 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-05 20:03:41.989004 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-05 20:03:41.989012 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-05 20:03:41.989019 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-05 20:03:41.989026 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-05 20:03:41.989032 | orchestrator |
2025-06-05 20:03:41.989039 | orchestrator |
2025-06-05 20:03:41.989046 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:03:41.989053 | orchestrator | Thursday 05 June 2025 20:03:41 +0000 (0:00:00.431) 0:08:32.079 *********
2025-06-05 20:03:41.989059 | orchestrator | ===============================================================================
2025-06-05 20:03:41.989073 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.28s
2025-06-05 20:03:41.989080 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.80s
2025-06-05 20:03:41.989087 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.02s
2025-06-05 20:03:41.989094 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.01s
2025-06-05 20:03:41.989101 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.85s
2025-06-05 20:03:41.989109 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.36s
2025-06-05 20:03:41.989116 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.68s
2025-06-05 20:03:41.989122 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.26s
2025-06-05 20:03:41.989130 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 16.71s
2025-06-05 20:03:41.989137 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.37s
2025-06-05 20:03:41.989144 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.90s
2025-06-05 20:03:41.989150 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.17s
2025-06-05 20:03:41.989157 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.16s
2025-06-05 20:03:41.989164 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 12.15s
2025-06-05 20:03:41.989171 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.79s
2025-06-05 20:03:41.989178 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.61s
2025-06-05 20:03:41.989185 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.77s
2025-06-05 20:03:41.989192 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.73s
2025-06-05 20:03:41.989199 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.46s
2025-06-05 20:03:41.989206 | orchestrator | service-ks-register : nova | Creating endpoints ------------------------- 7.03s
2025-06-05 20:03:41.989213 | orchestrator | 2025-06-05 20:03:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:03:45.024992 | orchestrator | 2025-06-05 20:03:45 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:03:48.059226 | orchestrator | 2025-06-05 20:03:48 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:03:51.110276 | orchestrator | 2025-06-05 20:03:51 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:03:54.152341 | orchestrator | 2025-06-05 20:03:54 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:03:57.193798 | orchestrator | 2025-06-05 20:03:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:00.242132 | orchestrator | 2025-06-05 20:04:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:03.284816 | orchestrator | 2025-06-05 20:04:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:06.329322 | orchestrator | 2025-06-05 20:04:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:09.369243 | orchestrator | 2025-06-05 20:04:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:12.412431 | orchestrator | 2025-06-05 20:04:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:15.455482 | orchestrator | 2025-06-05 20:04:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:18.501258 | orchestrator | 2025-06-05 20:04:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:21.536749 | orchestrator | 2025-06-05 20:04:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:24.579699 | orchestrator | 2025-06-05 20:04:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:27.623826 | orchestrator | 2025-06-05 20:04:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:30.664045 | orchestrator | 2025-06-05 20:04:30 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:33.708752 | orchestrator | 2025-06-05 20:04:33 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:36.750002 | orchestrator | 2025-06-05 20:04:36 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:39.793829 | orchestrator | 2025-06-05 20:04:39 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-06-05 20:04:42.831633 | orchestrator |
2025-06-05 20:04:43.113201 | orchestrator |
2025-06-05 20:04:43.116611 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Jun 5 20:04:43 UTC 2025
2025-06-05 20:04:43.116647 | orchestrator |
2025-06-05 20:04:43.539434 | orchestrator | ok: Runtime: 0:34:12.924063
2025-06-05 20:04:43.810079 |
2025-06-05 20:04:43.810219 | TASK [Bootstrap services]
2025-06-05 20:04:44.537408 | orchestrator |
2025-06-05 20:04:44.537561 | orchestrator | # BOOTSTRAP
2025-06-05 20:04:44.537574 | orchestrator |
2025-06-05 20:04:44.537583 | orchestrator | + set -e
2025-06-05 20:04:44.537591 | orchestrator | + echo
2025-06-05 20:04:44.537600 | orchestrator | + echo '# BOOTSTRAP'
2025-06-05 20:04:44.537612 | orchestrator | + echo
2025-06-05 20:04:44.537644 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-06-05 20:04:44.552103 | orchestrator | + set -e
2025-06-05 20:04:44.552183 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-06-05 20:04:48.218495 | orchestrator | 2025-06-05 20:04:48 | INFO  | It takes a moment until task f6c8bc6f-abf2-4a06-a4e2-16e302787357 (flavor-manager) has been started and output is visible here.
2025-06-05 20:04:51.784242 | orchestrator | 2025-06-05 20:04:51 | INFO  | Flavor SCS-1V-4 created
2025-06-05 20:04:52.221190 | orchestrator | 2025-06-05 20:04:52 | INFO  | Flavor SCS-2V-8 created
2025-06-05 20:04:52.615055 | orchestrator | 2025-06-05 20:04:52 | INFO  | Flavor SCS-4V-16 created
2025-06-05 20:04:52.780449 | orchestrator | 2025-06-05 20:04:52 | INFO  | Flavor SCS-8V-32 created
2025-06-05 20:04:52.929370 | orchestrator | 2025-06-05 20:04:52 | INFO  | Flavor SCS-1V-2 created
2025-06-05 20:04:53.073627 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-2V-4 created
2025-06-05 20:04:53.208435 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-4V-8 created
2025-06-05 20:04:53.349541 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-8V-16 created
2025-06-05 20:04:53.484485 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-16V-32 created
2025-06-05 20:04:53.618172 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-1V-8 created
2025-06-05 20:04:53.761217 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-2V-16 created
2025-06-05 20:04:53.901268 | orchestrator | 2025-06-05 20:04:53 | INFO  | Flavor SCS-4V-32 created
2025-06-05 20:04:54.035848 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-1L-1 created
2025-06-05 20:04:54.184849 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-2V-4-20s created
2025-06-05 20:04:54.333501 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-4V-16-100s created
2025-06-05 20:04:54.476676 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-1V-4-10 created
2025-06-05 20:04:54.617206 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-2V-8-20 created
2025-06-05 20:04:54.764044 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-4V-16-50 created
2025-06-05 20:04:54.901322 | orchestrator | 2025-06-05 20:04:54 | INFO  | Flavor SCS-8V-32-100 created
2025-06-05 20:04:55.047677 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-1V-2-5 created
2025-06-05 20:04:55.156665 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-2V-4-10 created
2025-06-05 20:04:55.286487 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-4V-8-20 created
2025-06-05 20:04:55.427956 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-8V-16-50 created
2025-06-05 20:04:55.560190 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-16V-32-100 created
2025-06-05 20:04:55.684497 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-1V-8-20 created
2025-06-05 20:04:55.812038 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-2V-16-50 created
2025-06-05 20:04:55.950743 | orchestrator | 2025-06-05 20:04:55 | INFO  | Flavor SCS-4V-32-100 created
2025-06-05 20:04:56.090621 | orchestrator | 2025-06-05 20:04:56 | INFO  | Flavor SCS-1L-1-5 created
2025-06-05 20:04:58.198263 | orchestrator | 2025-06-05 20:04:58 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-06-05 20:04:58.203043 | orchestrator | Registering Redlock._acquired_script
2025-06-05 20:04:58.203159 | orchestrator | Registering Redlock._extend_script
2025-06-05 20:04:58.203244 | orchestrator | Registering Redlock._release_script
2025-06-05 20:04:58.259954 | orchestrator | 2025-06-05 20:04:58 | INFO  | Task 485067ff-aa77-4801-877f-8567a1e78dae (bootstrap-basic) was prepared for execution.
2025-06-05 20:04:58.260062 | orchestrator | 2025-06-05 20:04:58 | INFO  | It takes a moment until task 485067ff-aa77-4801-877f-8567a1e78dae (bootstrap-basic) has been started and output is visible here.
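The flavor names created above follow the SCS flavor naming scheme: SCS-<vCPUs><class>-<RAM GiB>[-<disk GB>[s]], where "V" marks a regular virtual CPU, "L" a low-performance vCPU, and a trailing "s" an SSD-backed root disk. As an illustrative sketch only (this helper is not part of the testbed tooling), such names can be decomposed like this:

```python
import re

# Hypothetical parser for SCS flavor names as seen in the log above,
# e.g. SCS-4V-16-100s -> 4 vCPUs, 16 GiB RAM, 100 GB SSD disk.
FLAVOR_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[VL])-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_class>s)?)?$"
)

def parse_flavor(name: str) -> dict:
    """Split an SCS flavor name into its resource components."""
    m = FLAVOR_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "low_perf": m["cpu_class"] == "L",       # "L" = low-performance vCPU
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else None,  # None = no root disk size encoded
        "ssd": m["disk_class"] == "s",           # "s" suffix = SSD-backed disk
    }
```

For example, SCS-1L-1-5 parses as one low-performance vCPU, 1 GiB RAM, and a 5 GB disk.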
2025-06-05 20:05:02.281974 | orchestrator |
2025-06-05 20:05:02.283426 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-06-05 20:05:02.285891 | orchestrator |
2025-06-05 20:05:02.286494 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-05 20:05:02.287026 | orchestrator | Thursday 05 June 2025 20:05:02 +0000 (0:00:00.078) 0:00:00.078 *********
2025-06-05 20:05:04.068990 | orchestrator | ok: [localhost]
2025-06-05 20:05:04.069108 | orchestrator |
2025-06-05 20:05:04.069485 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-06-05 20:05:04.069664 | orchestrator | Thursday 05 June 2025 20:05:04 +0000 (0:00:01.788) 0:00:01.866 *********
2025-06-05 20:05:11.960564 | orchestrator | ok: [localhost]
2025-06-05 20:05:11.960957 | orchestrator |
2025-06-05 20:05:11.963136 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-06-05 20:05:11.964242 | orchestrator | Thursday 05 June 2025 20:05:11 +0000 (0:00:07.888) 0:00:09.754 *********
2025-06-05 20:05:19.520590 | orchestrator | changed: [localhost]
2025-06-05 20:05:19.521099 | orchestrator |
2025-06-05 20:05:19.521610 | orchestrator | TASK [Get volume type local] ***************************************************
2025-06-05 20:05:19.522764 | orchestrator | Thursday 05 June 2025 20:05:19 +0000 (0:00:07.562) 0:00:17.316 *********
2025-06-05 20:05:26.644221 | orchestrator | ok: [localhost]
2025-06-05 20:05:26.644332 | orchestrator |
2025-06-05 20:05:26.645053 | orchestrator | TASK [Create volume type local] ************************************************
2025-06-05 20:05:26.646140 | orchestrator | Thursday 05 June 2025 20:05:26 +0000 (0:00:07.124) 0:00:24.440 *********
2025-06-05 20:05:33.121517 | orchestrator | changed: [localhost]
2025-06-05 20:05:33.121970 | orchestrator |
2025-06-05 20:05:33.122511 | orchestrator | TASK [Create public network] ***************************************************
2025-06-05 20:05:33.123856 | orchestrator | Thursday 05 June 2025 20:05:33 +0000 (0:00:06.477) 0:00:30.918 *********
2025-06-05 20:05:40.409123 | orchestrator | changed: [localhost]
2025-06-05 20:05:40.409253 | orchestrator |
2025-06-05 20:05:40.409291 | orchestrator | TASK [Set public network to default] *******************************************
2025-06-05 20:05:40.409649 | orchestrator | Thursday 05 June 2025 20:05:40 +0000 (0:00:07.281) 0:00:38.200 *********
2025-06-05 20:05:47.950631 | orchestrator | changed: [localhost]
2025-06-05 20:05:47.950779 | orchestrator |
2025-06-05 20:05:47.950998 | orchestrator | TASK [Create public subnet] ****************************************************
2025-06-05 20:05:47.952260 | orchestrator | Thursday 05 June 2025 20:05:47 +0000 (0:00:07.545) 0:00:45.745 *********
2025-06-05 20:05:52.546399 | orchestrator | changed: [localhost]
2025-06-05 20:05:52.546862 | orchestrator |
2025-06-05 20:05:52.547322 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-06-05 20:05:52.548262 | orchestrator | Thursday 05 June 2025 20:05:52 +0000 (0:00:04.598) 0:00:50.343 *********
2025-06-05 20:05:57.184626 | orchestrator | changed: [localhost]
2025-06-05 20:05:57.186737 | orchestrator |
2025-06-05 20:05:57.187261 | orchestrator | TASK [Create manager role] *****************************************************
2025-06-05 20:05:57.188312 | orchestrator | Thursday 05 June 2025 20:05:57 +0000 (0:00:04.638) 0:00:54.981 *********
2025-06-05 20:06:00.743478 | orchestrator | ok: [localhost]
2025-06-05 20:06:00.745926 | orchestrator |
2025-06-05 20:06:00.746823 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:06:00.746854 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 20:06:00.746866 | orchestrator | 2025-06-05 20:06:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 20:06:00.746922 | orchestrator | 2025-06-05 20:06:00 | INFO  | Please wait and do not abort execution.
2025-06-05 20:06:00.746973 | orchestrator |
2025-06-05 20:06:00.747286 | orchestrator |
2025-06-05 20:06:00.749993 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:06:00.750087 | orchestrator | Thursday 05 June 2025 20:06:00 +0000 (0:00:03.560) 0:00:58.541 *********
2025-06-05 20:06:00.753063 | orchestrator | ===============================================================================
2025-06-05 20:06:00.753120 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.89s
2025-06-05 20:06:00.753285 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.56s
2025-06-05 20:06:00.753555 | orchestrator | Set public network to default ------------------------------------------- 7.55s
2025-06-05 20:06:00.753965 | orchestrator | Create public network --------------------------------------------------- 7.28s
2025-06-05 20:06:00.754133 | orchestrator | Get volume type local --------------------------------------------------- 7.12s
2025-06-05 20:06:00.754791 | orchestrator | Create volume type local ------------------------------------------------ 6.48s
2025-06-05 20:06:00.757779 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.64s
2025-06-05 20:06:00.757841 | orchestrator | Create public subnet ---------------------------------------------------- 4.60s
2025-06-05 20:06:00.757860 | orchestrator | Create manager role ----------------------------------------------------- 3.56s
2025-06-05 20:06:00.758387 | orchestrator | Gathering Facts --------------------------------------------------------- 1.79s
2025-06-05 20:06:03.056643 | orchestrator | 2025-06-05 20:06:03 | INFO  | It takes a moment until task a028617b-3b2f-4f63-90cd-5ba7337002d4 (image-manager) has been started and output is visible here.
2025-06-05 20:06:06.442213 | orchestrator | 2025-06-05 20:06:06 | INFO  | Processing image 'Cirros 0.6.2'
2025-06-05 20:06:06.688397 | orchestrator | 2025-06-05 20:06:06 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-06-05 20:06:06.689104 | orchestrator | 2025-06-05 20:06:06 | INFO  | Importing image Cirros 0.6.2
2025-06-05 20:06:06.690071 | orchestrator | 2025-06-05 20:06:06 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-05 20:06:08.526948 | orchestrator | 2025-06-05 20:06:08 | INFO  | Waiting for image to leave queued state...
2025-06-05 20:06:10.573234 | orchestrator | 2025-06-05 20:06:10 | INFO  | Waiting for import to complete...
2025-06-05 20:06:20.726391 | orchestrator | 2025-06-05 20:06:20 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-06-05 20:06:20.916808 | orchestrator | 2025-06-05 20:06:20 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-06-05 20:06:20.917137 | orchestrator | 2025-06-05 20:06:20 | INFO  | Setting internal_version = 0.6.2
2025-06-05 20:06:20.918479 | orchestrator | 2025-06-05 20:06:20 | INFO  | Setting image_original_user = cirros
2025-06-05 20:06:20.919244 | orchestrator | 2025-06-05 20:06:20 | INFO  | Adding tag os:cirros
2025-06-05 20:06:21.167498 | orchestrator | 2025-06-05 20:06:21 | INFO  | Setting property architecture: x86_64
2025-06-05 20:06:21.469741 | orchestrator | 2025-06-05 20:06:21 | INFO  | Setting property hw_disk_bus: scsi
2025-06-05 20:06:21.721644 | orchestrator | 2025-06-05 20:06:21 | INFO  | Setting property hw_rng_model: virtio
2025-06-05 20:06:21.942538 | orchestrator | 2025-06-05 20:06:21 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-05 20:06:22.206491 | orchestrator | 2025-06-05 20:06:22 | INFO  | Setting property hw_watchdog_action: reset
2025-06-05 20:06:22.414751 | orchestrator | 2025-06-05 20:06:22 | INFO  | Setting property hypervisor_type: qemu
2025-06-05 20:06:22.653728 | orchestrator | 2025-06-05 20:06:22 | INFO  | Setting property os_distro: cirros
2025-06-05 20:06:22.894239 | orchestrator | 2025-06-05 20:06:22 | INFO  | Setting property replace_frequency: never
2025-06-05 20:06:23.113506 | orchestrator | 2025-06-05 20:06:23 | INFO  | Setting property uuid_validity: none
2025-06-05 20:06:23.314089 | orchestrator | 2025-06-05 20:06:23 | INFO  | Setting property provided_until: none
2025-06-05 20:06:23.530001 | orchestrator | 2025-06-05 20:06:23 | INFO  | Setting property image_description: Cirros
2025-06-05 20:06:23.731753 | orchestrator | 2025-06-05 20:06:23 | INFO  | Setting property image_name: Cirros
2025-06-05 20:06:23.968255 | orchestrator | 2025-06-05 20:06:23 | INFO  | Setting property internal_version: 0.6.2
2025-06-05 20:06:24.185349 | orchestrator | 2025-06-05 20:06:24 | INFO  | Setting property image_original_user: cirros
2025-06-05 20:06:24.399724 | orchestrator | 2025-06-05 20:06:24 | INFO  | Setting property os_version: 0.6.2
2025-06-05 20:06:24.619049 | orchestrator | 2025-06-05 20:06:24 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-05 20:06:24.844573 | orchestrator | 2025-06-05 20:06:24 | INFO  | Setting property image_build_date: 2023-05-30
2025-06-05 20:06:25.083758 | orchestrator | 2025-06-05 20:06:25 | INFO  | Checking status of 'Cirros 0.6.2'
2025-06-05 20:06:25.084020 | orchestrator | 2025-06-05 20:06:25 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-06-05 20:06:25.084893 | orchestrator | 2025-06-05 20:06:25 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-06-05 20:06:25.281062 | orchestrator | 2025-06-05 20:06:25 | INFO  | Processing image 'Cirros 0.6.3'
2025-06-05 20:06:25.474892 | orchestrator | 2025-06-05 20:06:25 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-05 20:06:25.476122 | orchestrator | 2025-06-05 20:06:25 | INFO  | Importing image Cirros 0.6.3
2025-06-05 20:06:25.476185 | orchestrator | 2025-06-05 20:06:25 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-05 20:06:26.642304 | orchestrator | 2025-06-05 20:06:26 | INFO  | Waiting for image to leave queued state...
2025-06-05 20:06:28.695269 | orchestrator | 2025-06-05 20:06:28 | INFO  | Waiting for import to complete...
2025-06-05 20:06:38.832102 | orchestrator | 2025-06-05 20:06:38 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-05 20:06:39.107345 | orchestrator | 2025-06-05 20:06:39 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-05 20:06:39.108105 | orchestrator | 2025-06-05 20:06:39 | INFO  | Setting internal_version = 0.6.3
2025-06-05 20:06:39.108739 | orchestrator | 2025-06-05 20:06:39 | INFO  | Setting image_original_user = cirros
2025-06-05 20:06:39.109599 | orchestrator | 2025-06-05 20:06:39 | INFO  | Adding tag os:cirros
2025-06-05 20:06:39.347946 | orchestrator | 2025-06-05 20:06:39 | INFO  | Setting property architecture: x86_64
2025-06-05 20:06:39.562410 | orchestrator | 2025-06-05 20:06:39 | INFO  | Setting property hw_disk_bus: scsi
2025-06-05 20:06:39.785399 | orchestrator | 2025-06-05 20:06:39 | INFO  | Setting property hw_rng_model: virtio
2025-06-05 20:06:39.996091 | orchestrator | 2025-06-05 20:06:39 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-05 20:06:40.241320 | orchestrator | 2025-06-05 20:06:40 | INFO  | Setting property hw_watchdog_action: reset
2025-06-05 20:06:40.451708 | orchestrator | 2025-06-05 20:06:40 | INFO  | Setting property hypervisor_type: qemu
2025-06-05 20:06:40.684065 | orchestrator | 2025-06-05 20:06:40 | INFO  | Setting property os_distro: cirros
2025-06-05 20:06:40.945892 | orchestrator | 2025-06-05 20:06:40 | INFO  | Setting property replace_frequency: never
2025-06-05 20:06:41.166923 | orchestrator | 2025-06-05 20:06:41 | INFO  | Setting property uuid_validity: none
2025-06-05 20:06:41.392428 | orchestrator | 2025-06-05 20:06:41 | INFO  | Setting property provided_until: none
2025-06-05 20:06:41.644254 | orchestrator | 2025-06-05 20:06:41 | INFO  | Setting property image_description: Cirros
2025-06-05 20:06:41.879370 | orchestrator | 2025-06-05 20:06:41 | INFO  | Setting property image_name: Cirros
2025-06-05 20:06:42.135633 | orchestrator | 2025-06-05 20:06:42 | INFO  | Setting property internal_version: 0.6.3
2025-06-05 20:06:42.356513 | orchestrator | 2025-06-05 20:06:42 | INFO  | Setting property image_original_user: cirros
2025-06-05 20:06:42.599422 | orchestrator | 2025-06-05 20:06:42 | INFO  | Setting property os_version: 0.6.3
2025-06-05 20:06:42.824353 | orchestrator | 2025-06-05 20:06:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-05 20:06:43.029526 | orchestrator | 2025-06-05 20:06:43 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-05 20:06:43.458560 | orchestrator | 2025-06-05 20:06:43 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-05 20:06:43.459311 | orchestrator | 2025-06-05 20:06:43 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-05 20:06:43.460261 | orchestrator | 2025-06-05 20:06:43 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-05 20:06:44.938867 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-05 20:06:46.773543 | orchestrator | 2025-06-05 20:06:46 | INFO  | date: 2025-06-05
2025-06-05 20:06:46.773649 | orchestrator | 2025-06-05 20:06:46 | INFO  | image: octavia-amphora-haproxy-2024.2.20250605.qcow2
2025-06-05 20:06:46.773670 | orchestrator | 2025-06-05 20:06:46 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250605.qcow2
2025-06-05 20:06:46.773706 | orchestrator | 2025-06-05 20:06:46 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250605.qcow2.CHECKSUM
2025-06-05 20:06:46.816356 | orchestrator | 2025-06-05 20:06:46 | INFO  | checksum: d20fe80ea6279e9425b973206d45e035c996948b62c82c9510b47e468f434d44
2025-06-05 20:06:46.885598 | orchestrator | 2025-06-05 20:06:46 | INFO  | It takes a moment until task 7c21e93f-6b16-435d-8ef7-6e6d518ac6c9 (image-manager) has been started and output is visible here.
2025-06-05 20:06:47.124294 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
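The amphora image step above fetches a qcow2 file together with a .CHECKSUM companion and logs the expected SHA-256 digest before importing. A minimal sketch of that kind of verification step (function names here are illustrative and not the actual script's or image-manager's API):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash a (potentially large) image file in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path: str, expected: str) -> bool:
    """Compare the file digest against the digest read from a .CHECKSUM file."""
    return sha256_of_file(path) == expected.strip().lower()
```

Chunked reading keeps memory use flat regardless of image size, which matters for multi-GB qcow2 files like the amphora image.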
2025-06-05 20:06:47.124862 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-05 20:06:49.425597 | orchestrator | 2025-06-05 20:06:49 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-05'
2025-06-05 20:06:49.438642 | orchestrator | 2025-06-05 20:06:49 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250605.qcow2: 200
2025-06-05 20:06:49.438765 | orchestrator | 2025-06-05 20:06:49 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-05
2025-06-05 20:06:49.438908 | orchestrator | 2025-06-05 20:06:49 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250605.qcow2
2025-06-05 20:06:49.945776 | orchestrator | 2025-06-05 20:06:49 | INFO  | Waiting for image to leave queued state...
2025-06-05 20:06:51.994362 | orchestrator | 2025-06-05 20:06:51 | INFO  | Waiting for import to complete...
2025-06-05 20:07:02.311947 | orchestrator | 2025-06-05 20:07:02 | INFO  | Waiting for import to complete...
2025-06-05 20:07:12.429937 | orchestrator | 2025-06-05 20:07:12 | INFO  | Waiting for import to complete...
2025-06-05 20:07:22.522789 | orchestrator | 2025-06-05 20:07:22 | INFO  | Waiting for import to complete...
2025-06-05 20:07:32.614371 | orchestrator | 2025-06-05 20:07:32 | INFO  | Waiting for import to complete...
2025-06-05 20:07:42.737800 | orchestrator | 2025-06-05 20:07:42 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-05' successfully completed, reloading images 2025-06-05 20:07:43.071657 | orchestrator | 2025-06-05 20:07:43 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-05' 2025-06-05 20:07:43.072538 | orchestrator | 2025-06-05 20:07:43 | INFO  | Setting internal_version = 2025-06-05 2025-06-05 20:07:43.073139 | orchestrator | 2025-06-05 20:07:43 | INFO  | Setting image_original_user = ubuntu 2025-06-05 20:07:43.073866 | orchestrator | 2025-06-05 20:07:43 | INFO  | Adding tag amphora 2025-06-05 20:07:43.320769 | orchestrator | 2025-06-05 20:07:43 | INFO  | Adding tag os:ubuntu 2025-06-05 20:07:43.513021 | orchestrator | 2025-06-05 20:07:43 | INFO  | Setting property architecture: x86_64 2025-06-05 20:07:43.712672 | orchestrator | 2025-06-05 20:07:43 | INFO  | Setting property hw_disk_bus: scsi 2025-06-05 20:07:43.940235 | orchestrator | 2025-06-05 20:07:43 | INFO  | Setting property hw_rng_model: virtio 2025-06-05 20:07:44.146664 | orchestrator | 2025-06-05 20:07:44 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-05 20:07:44.358360 | orchestrator | 2025-06-05 20:07:44 | INFO  | Setting property hw_watchdog_action: reset 2025-06-05 20:07:44.566207 | orchestrator | 2025-06-05 20:07:44 | INFO  | Setting property hypervisor_type: qemu 2025-06-05 20:07:44.759167 | orchestrator | 2025-06-05 20:07:44 | INFO  | Setting property os_distro: ubuntu 2025-06-05 20:07:44.948052 | orchestrator | 2025-06-05 20:07:44 | INFO  | Setting property replace_frequency: quarterly 2025-06-05 20:07:45.171500 | orchestrator | 2025-06-05 20:07:45 | INFO  | Setting property uuid_validity: last-1 2025-06-05 20:07:45.392108 | orchestrator | 2025-06-05 20:07:45 | INFO  | Setting property provided_until: none 2025-06-05 20:07:45.558276 | orchestrator | 2025-06-05 20:07:45 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-05 
20:07:45.783463 | orchestrator | 2025-06-05 20:07:45 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-05 20:07:46.010277 | orchestrator | 2025-06-05 20:07:46 | INFO  | Setting property internal_version: 2025-06-05 2025-06-05 20:07:46.243662 | orchestrator | 2025-06-05 20:07:46 | INFO  | Setting property image_original_user: ubuntu 2025-06-05 20:07:46.449722 | orchestrator | 2025-06-05 20:07:46 | INFO  | Setting property os_version: 2025-06-05 2025-06-05 20:07:46.689583 | orchestrator | 2025-06-05 20:07:46 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250605.qcow2 2025-06-05 20:07:46.947247 | orchestrator | 2025-06-05 20:07:46 | INFO  | Setting property image_build_date: 2025-06-05 2025-06-05 20:07:47.200252 | orchestrator | 2025-06-05 20:07:47 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-05' 2025-06-05 20:07:47.200375 | orchestrator | 2025-06-05 20:07:47 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-05' 2025-06-05 20:07:47.365114 | orchestrator | 2025-06-05 20:07:47 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-05 20:07:47.365924 | orchestrator | 2025-06-05 20:07:47 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-05 20:07:47.366590 | orchestrator | 2025-06-05 20:07:47 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-05 20:07:47.366964 | orchestrator | 2025-06-05 20:07:47 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-05 20:07:47.978598 | orchestrator | ok: Runtime: 0:03:03.648324 2025-06-05 20:07:48.033124 | 2025-06-05 20:07:48.033232 | TASK [Run checks] 2025-06-05 20:07:48.720523 | orchestrator | + set -e 2025-06-05 20:07:48.720729 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-05 20:07:48.720754 | 
orchestrator | ++ export INTERACTIVE=false
2025-06-05 20:07:48.720777 | orchestrator | ++ INTERACTIVE=false
2025-06-05 20:07:48.720791 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-05 20:07:48.720803 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-05 20:07:48.720818 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-05 20:07:48.721832 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-05 20:07:48.728926 | orchestrator |
2025-06-05 20:07:48.728982 | orchestrator | # CHECK
2025-06-05 20:07:48.728994 | orchestrator |
2025-06-05 20:07:48.729006 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-05 20:07:48.729022 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-05 20:07:48.729037 | orchestrator | + echo
2025-06-05 20:07:48.729056 | orchestrator | + echo '# CHECK'
2025-06-05 20:07:48.729081 | orchestrator | + echo
2025-06-05 20:07:48.729114 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-05 20:07:48.730080 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-05 20:07:48.788681 | orchestrator |
2025-06-05 20:07:48.788770 | orchestrator | ## Containers @ testbed-manager
2025-06-05 20:07:48.788781 | orchestrator |
2025-06-05 20:07:48.788792 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-05 20:07:48.788800 | orchestrator | + echo
2025-06-05 20:07:48.788808 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-05 20:07:48.788815 | orchestrator | + echo
2025-06-05 20:07:48.788822 | orchestrator | + osism container testbed-manager ps
2025-06-05 20:07:50.887733 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-05 20:07:50.887938 | orchestrator | 27f92fe9a660 registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_blackbox_exporter
2025-06-05 20:07:50.887967 | orchestrator | 4d96b69ece6a registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager
2025-06-05 20:07:50.887988 | orchestrator | a699145b7529 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-06-05 20:07:50.888000 | orchestrator | d1f104ebfb63 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-05 20:07:50.888012 | orchestrator | 4d65f914730e registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_server
2025-06-05 20:07:50.888024 | orchestrator | fd16705fd351 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient
2025-06-05 20:07:50.888040 | orchestrator | d16915fc5aee registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-06-05 20:07:50.888053 | orchestrator | 7b214335c117 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox
2025-06-05 20:07:50.888064 | orchestrator | d89c436d2f31 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-06-05 20:07:50.888103 | orchestrator | e43f594e4c02 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 30 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin
2025-06-05 20:07:50.888115 | orchestrator | bda85d91d18c registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient
2025-06-05 20:07:50.888127 | orchestrator | 98e24f29294f registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer
2025-06-05 20:07:50.888139 | orchestrator | e7cb507252da registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 51 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-05 20:07:50.888157 | orchestrator | 0b8e32caa92f registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 55 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1
2025-06-05 20:07:50.888191 | orchestrator | 2e915b649fba registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) kolla-ansible
2025-06-05 20:07:50.888203 | orchestrator | 8eee1ecf6ad6 registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) ceph-ansible
2025-06-05 20:07:50.888215 | orchestrator | 939e5be6e832 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) osism-ansible
2025-06-05 20:07:50.888226 | orchestrator | cb6b11a91158 registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) osism-kubernetes
2025-06-05 20:07:50.888238 | orchestrator | b5dd227663f5 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 55 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-05 20:07:50.888249 | orchestrator | f1d11e8dff3e registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 55 minutes ago Up 38 minutes (healthy) osismclient
2025-06-05 20:07:50.888261 | orchestrator | 8d11782b5268 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 55 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-05 20:07:50.888272 | orchestrator | c7ccebd46dfe registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 55 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1
2025-06-05 20:07:50.888284 | orchestrator | 2ff71f60d846 registry.osism.tech/osism/osism:0.20250530.0
"/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-listener-1
2025-06-05 20:07:50.888304 | orchestrator | dda23dcfac9b registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-openstack-1
2025-06-05 20:07:50.888316 | orchestrator | 0555693754cb registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-beat-1
2025-06-05 20:07:50.888327 | orchestrator | a093e32bd7f8 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) manager-flower-1
2025-06-05 20:07:50.888339 | orchestrator | 091c6604be3b registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 55 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-05 20:07:50.888351 | orchestrator | 58d332ae7dfb registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 56 minutes ago Up 56 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-05 20:07:51.130597 | orchestrator |
2025-06-05 20:07:51.130713 | orchestrator | ## Images @ testbed-manager
2025-06-05 20:07:51.130730 | orchestrator |
2025-06-05 20:07:51.130742 | orchestrator | + echo
2025-06-05 20:07:51.130754 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-05 20:07:51.130767 | orchestrator | + echo
2025-06-05 20:07:51.130778 | orchestrator | + osism container testbed-manager images
2025-06-05 20:07:53.120465 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-05 20:07:53.120580 | orchestrator | registry.osism.tech/osism/homer v25.05.2 c4b11f59ed93 17 hours ago 11.5MB
2025-06-05 20:07:53.120600 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 d511176028d8 17 hours ago 226MB
2025-06-05 20:07:53.120610 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 3 days ago 574MB
2025-06-05 20:07:53.120620 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 4 days ago 578MB
2025-06-05 20:07:53.120654 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 days ago 319MB
2025-06-05 20:07:53.120665 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 days ago 747MB
2025-06-05 20:07:53.120675 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 days ago 629MB
2025-06-05 20:07:53.120685 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 5 days ago 892MB
2025-06-05 20:07:53.120694 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 5 days ago 361MB
2025-06-05 20:07:53.120704 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 days ago 411MB
2025-06-05 20:07:53.120714 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 days ago 359MB
2025-06-05 20:07:53.120723 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 5 days ago 457MB
2025-06-05 20:07:53.120733 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 5 days ago 538MB
2025-06-05 20:07:53.120765 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 5 days ago 1.21GB
2025-06-05 20:07:53.120776 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 5 days ago 308MB
2025-06-05 20:07:53.120786 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 6 days ago 297MB
2025-06-05 20:07:53.120795 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 7 days ago 41.4MB
2025-06-05 20:07:53.120805 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 9 days ago 224MB
2025-06-05 20:07:53.120814 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 3 weeks ago 453MB
2025-06-05 20:07:53.120824 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 3 months ago 328MB
2025-06-05 20:07:53.120833 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-05 20:07:53.120870 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-05 20:07:53.120881 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB
2025-06-05 20:07:53.355308 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-05 20:07:53.356363 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-05 20:07:53.415456 | orchestrator |
2025-06-05 20:07:53.415611 | orchestrator | ## Containers @ testbed-node-0
2025-06-05 20:07:53.415659 | orchestrator |
2025-06-05 20:07:53.415674 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-05 20:07:53.415712 | orchestrator | + echo
2025-06-05 20:07:53.415726 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-05 20:07:53.415738 | orchestrator | + echo
2025-06-05 20:07:53.415777 | orchestrator | + osism container testbed-node-0 ps
2025-06-05 20:07:55.536802 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-05 20:07:55.536958 | orchestrator | 4d3469ec9aaf registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-06-05 20:07:55.536977 | orchestrator | 70050f7b3c51 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-06-05 20:07:55.536989 | orchestrator | 25e4421af91c registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-06-05 20:07:55.537001 | orchestrator
| a4a5411cc2a9 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-06-05 20:07:55.537012 | orchestrator | c6c5a10dc68f registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-06-05 20:07:55.537023 | orchestrator | bb5060e9f4fc registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-06-05 20:07:55.537034 | orchestrator | 8a3d583974ce registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api
2025-06-05 20:07:55.537064 | orchestrator | 626a52107f4d registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-06-05 20:07:55.537075 | orchestrator | d5f907ef9388 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter
2025-06-05 20:07:55.537110 | orchestrator | dce706af8b06 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-06-05 20:07:55.537122 | orchestrator | 48eaea59a96a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter
2025-06-05 20:07:55.537133 | orchestrator | d0320b7d9095 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_mysqld_exporter
2025-06-05 20:07:55.537144 | orchestrator | d528334300b0 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter
2025-06-05 20:07:55.537155 | orchestrator | 560d0e5008a9 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-06-05 20:07:55.537166 | orchestrator | dd217f7695b3 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-06-05 20:07:55.537177 | orchestrator | 68f033fd8694 registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-06-05 20:07:55.537188 | orchestrator | b19e5a75f588 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-06-05 20:07:55.537199 | orchestrator | d3186264a713 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 14 minutes (healthy) designate_mdns
2025-06-05 20:07:55.537210 | orchestrator | 338d920a5354 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer
2025-06-05 20:07:55.537242 | orchestrator | 71461a99cf40 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-06-05 20:07:55.537254 | orchestrator | 0113ea63a2e5 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-06-05 20:07:55.537265 | orchestrator | 9747668d5248 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-06-05 20:07:55.537276 | orchestrator | 74bd0f18821b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-06-05 20:07:55.537287 | orchestrator | 794642785a40 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-06-05 20:07:55.537298 | orchestrator | 67fc6ce1cf43 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api
2025-06-05 20:07:55.537309 | orchestrator | f6bc75772c04 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2025-06-05 20:07:55.537325 | orchestrator | 4404697fa6c2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0
2025-06-05 20:07:55.537343 | orchestrator | aa1ac5406a68 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-06-05 20:07:55.537355 | orchestrator | 7a92d8c54b91 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-06-05 20:07:55.537366 | orchestrator | 2522bb3fd53c registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh
2025-06-05 20:07:55.537382 | orchestrator | e54664fb143c registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-06-05 20:07:55.537393 | orchestrator | ca1e355efb36 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-06-05 20:07:55.537404 | orchestrator | 10b65f285396 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards
2025-06-05 20:07:55.537414 | orchestrator | ad97261c62af registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-06-05 20:07:55.537425 | orchestrator | 64ab162329d3 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch
2025-06-05 20:07:55.537436 | orchestrator | 23badebae58a registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-06-05 20:07:55.537447 | orchestrator | e2ff2513fc5b registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql
2025-06-05 20:07:55.537458 | orchestrator | 917c681a633f registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-06-05 20:07:55.537474 | orchestrator | 2ab140ee69cf registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-06-05 20:07:55.537485 | orchestrator | 6b53fba59a98 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db
2025-06-05 20:07:55.537501 | orchestrator | a31bb2ee33b2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db
2025-06-05 20:07:55.537513 | orchestrator | e5d8124af8c1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-06-05 20:07:55.537524 | orchestrator | 32f984efdf70 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-06-05 20:07:55.537535 | orchestrator | 14b0d0d8edab registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-06-05 20:07:55.537546 | orchestrator | 18561bfc7d0c registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-06-05 20:07:55.537563 | orchestrator | e47d20018b28 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-06-05 20:07:55.537574 | orchestrator | b884bd38a8c2 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-06-05 20:07:55.537585 | orchestrator | ecd7d132391d registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-06-05 20:07:55.537596 | orchestrator | 035478b23c95 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-06-05 20:07:55.537607 | orchestrator | b8b34a37e3d4 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-06-05 20:07:55.537618 | orchestrator | 4dc84ed03189 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-06-05 20:07:55.537629 | orchestrator | fce15cb1140c registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-06-05 20:07:55.759816 | orchestrator |
2025-06-05 20:07:55.759953 | orchestrator | ## Images @ testbed-node-0
2025-06-05 20:07:55.759972 | orchestrator |
2025-06-05 20:07:55.759984 | orchestrator | + echo
2025-06-05 20:07:55.759997 | orchestrator | + echo '## Images @ testbed-node-0'
2025-06-05 20:07:55.760009 | orchestrator | + echo
2025-06-05 20:07:55.760021 | orchestrator | + osism container testbed-node-0 images
2025-06-05 20:07:57.774886 |
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-05 20:07:57.774997 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 5 days ago 319MB
2025-06-05 20:07:57.775011 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 days ago 319MB
2025-06-05 20:07:57.775022 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 5 days ago 330MB
2025-06-05 20:07:57.775032 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 5 days ago 1.59GB
2025-06-05 20:07:57.775042 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 5 days ago 1.55GB
2025-06-05 20:07:57.775051 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 5 days ago 419MB
2025-06-05 20:07:57.775061 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 days ago 747MB
2025-06-05 20:07:57.775071 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 5 days ago 376MB
2025-06-05 20:07:57.775081 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 5 days ago 327MB
2025-06-05 20:07:57.775090 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 days ago 629MB
2025-06-05 20:07:57.775100 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 5 days ago 1.01GB
2025-06-05 20:07:57.775110 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 5 days ago 591MB
2025-06-05 20:07:57.775119 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 5 days ago 354MB
2025-06-05 20:07:57.775153 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 5 days ago 352MB
2025-06-05 20:07:57.775164 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 days ago 411MB
2025-06-05 20:07:57.775173 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 5 days ago 345MB
2025-06-05 20:07:57.775183 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 days ago 359MB
2025-06-05 20:07:57.775193 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 5 days ago 326MB
2025-06-05 20:07:57.775203 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 5 days ago 325MB
2025-06-05 20:07:57.775229 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 5 days ago 1.21GB
2025-06-05 20:07:57.775239 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 5 days ago 362MB
2025-06-05 20:07:57.775249 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 5 days ago 362MB
2025-06-05 20:07:57.775259 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 5 days ago 1.15GB
2025-06-05 20:07:57.775268 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 5 days ago 1.04GB
2025-06-05 20:07:57.775278 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 5 days ago 1.25GB
2025-06-05 20:07:57.775287 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 5 days ago 1.04GB
2025-06-05 20:07:57.775297 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 5 days ago 1.04GB
2025-06-05 20:07:57.775306 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 5 days ago 1.04GB
2025-06-05 20:07:57.775316 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 5 days ago 1.04GB
2025-06-05 20:07:57.775326 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 5 days ago 1.2GB
2025-06-05 20:07:57.775335 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 5 days ago 1.31GB
2025-06-05 20:07:57.775369 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 5 days ago 1.12GB
2025-06-05 20:07:57.775380 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 5 days ago 1.12GB
2025-06-05 20:07:57.775391 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 5 days ago 1.1GB
2025-06-05 20:07:57.775403 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 5 days ago 1.1GB
2025-06-05 20:07:57.775415 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 5 days ago 1.1GB
2025-06-05 20:07:57.775426 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 5 days ago 1.41GB
2025-06-05 20:07:57.775437 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 5 days ago 1.41GB
2025-06-05 20:07:57.775449 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 5 days ago 1.06GB
2025-06-05 20:07:57.775460 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 5 days ago 1.06GB
2025-06-05 20:07:57.775478 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 5 days ago 1.05GB
2025-06-05 20:07:57.775490 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 5 days ago 1.05GB
2025-06-05 20:07:57.775500 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 5 days ago 1.05GB
2025-06-05 20:07:57.775511 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 5 days ago 1.05GB
2025-06-05 20:07:57.775528 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 5 days ago 1.04GB
2025-06-05 20:07:57.775540 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 5 days ago 1.04GB
2025-06-05 20:07:57.775551 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 5 days ago 1.3GB
2025-06-05 20:07:57.775562 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 5 days ago 1.29GB
2025-06-05 20:07:57.775573 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 5 days ago 1.42GB
2025-06-05 20:07:57.775585 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 5 days ago 1.29GB
2025-06-05 20:07:57.775596 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 5 days ago 1.06GB
2025-06-05 20:07:57.775607 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 5 days ago 1.06GB
2025-06-05 20:07:57.775619 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 5 days ago 1.06GB
2025-06-05 20:07:57.775631 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 5 days ago 1.11GB
2025-06-05 20:07:57.775641 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 5 days ago 1.13GB
2025-06-05 20:07:57.775652 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 5 days ago 1.11GB
2025-06-05 20:07:57.775663 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 5 days ago 1.11GB
2025-06-05 20:07:57.775675 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 5 days ago 1.12GB
2025-06-05 20:07:57.775685 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 5 days ago 947MB
2025-06-05 20:07:57.775697 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 5 days ago 947MB
2025-06-05 20:07:57.775707 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 5 days ago 948MB
2025-06-05 20:07:57.775718 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 5 days ago 948MB
2025-06-05 20:07:57.775729 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB
2025-06-05 20:07:58.073083 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-05 20:07:58.073183 | orchestrator | ++ semver 9.1.0 5.0.0
2025-06-05 20:07:58.126277 | orchestrator |
2025-06-05 20:07:58.126375 | orchestrator | ## Containers @ testbed-node-1
2025-06-05 20:07:58.126390 | orchestrator |
2025-06-05 20:07:58.126402 | orchestrator | + [[ 1 -eq -1 ]]
2025-06-05 20:07:58.126413 | orchestrator | + echo
2025-06-05 20:07:58.126425 | orchestrator | + echo '## Containers @ testbed-node-1'
2025-06-05 20:07:58.126438 | orchestrator | + echo
2025-06-05 20:07:58.126475 | orchestrator | + osism container testbed-node-1 ps
2025-06-05 20:08:00.292704 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-05 20:08:00.292811 | orchestrator | 82bb45d86664 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-06-05 20:08:00.292827 | orchestrator | 73150047e33a
registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-05 20:08:00.292872 | orchestrator | 5f1615ad371a registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-05 20:08:00.292885 | orchestrator | 7f6339bbc74f registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-05 20:08:00.292916 | orchestrator | 4b168ef9a458 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-05 20:08:00.292928 | orchestrator | 7be32eb657bd registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-05 20:08:00.292941 | orchestrator | c563809f53ab registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-05 20:08:00.292952 | orchestrator | 2babd7d1d473 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-05 20:08:00.292964 | orchestrator | 2888097bb317 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-05 20:08:00.292978 | orchestrator | 13b5d576e950 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-05 20:08:00.292991 | orchestrator | 34a414e87c85 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-05 20:08:00.293002 | orchestrator | 91cc8900c8d3 
registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-05 20:08:00.293014 | orchestrator | 3a7c85d7de9d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-05 20:08:00.293026 | orchestrator | f1c1dd83feda registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-05 20:08:00.293038 | orchestrator | 9e13f9071188 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-05 20:08:00.293049 | orchestrator | 3f4a348c40eb registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-05 20:08:00.293061 | orchestrator | bf6ed05d61e9 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-06-05 20:08:00.293096 | orchestrator | 745cd5b67b95 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-05 20:08:00.293108 | orchestrator | a30e13b5ce0d registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-05 20:08:00.293141 | orchestrator | 0b7d444e39b3 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-05 20:08:00.293153 | orchestrator | a30372695949 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-05 20:08:00.293165 | 
orchestrator | 8bda778021e7 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-06-05 20:08:00.293177 | orchestrator | 9fbca9e3250b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-05 20:08:00.293197 | orchestrator | b2141f3ecd09 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-06-05 20:08:00.293221 | orchestrator | 4f26ba65a208 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2025-06-05 20:08:00.293234 | orchestrator | 2313a5537b8c registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-06-05 20:08:00.293249 | orchestrator | 0ec452373ac1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-06-05 20:08:00.293262 | orchestrator | f65b11847ebc registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-05 20:08:00.293276 | orchestrator | 983adc0b9542 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-05 20:08:00.293290 | orchestrator | 62797abfbbe5 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-05 20:08:00.293303 | orchestrator | 4667647545c7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-05 20:08:00.293316 | orchestrator | 
443c25ad6dbd registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-05 20:08:00.293330 | orchestrator | e4641c0f57fd registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-05 20:08:00.293343 | orchestrator | 7ab20be04f08 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-05 20:08:00.293356 | orchestrator | 94d6331bd4c3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-06-05 20:08:00.293379 | orchestrator | 5552c0ca045d registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-05 20:08:00.293393 | orchestrator | a85ab7fc5fbf registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-05 20:08:00.293405 | orchestrator | 5db3578ada17 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-05 20:08:00.293419 | orchestrator | 772a1b0c6fb2 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 25 minutes ovn_northd 2025-06-05 20:08:00.293432 | orchestrator | 431eac40b18a registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-06-05 20:08:00.293451 | orchestrator | 8d23b7fb75e7 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-06-05 20:08:00.293465 | orchestrator | 3bb794e61b5b registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 
27 minutes ovn_controller 2025-06-05 20:08:00.293479 | orchestrator | d6cde0009f7e registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-05 20:08:00.293492 | orchestrator | f30135d76a37 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-1 2025-06-05 20:08:00.293505 | orchestrator | f68914bdbe57 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-05 20:08:00.293518 | orchestrator | e72d204c51d4 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-05 20:08:00.293532 | orchestrator | ab0fc3f157c4 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-05 20:08:00.293551 | orchestrator | 780671546bed registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-05 20:08:00.293565 | orchestrator | 270f6342a06c registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-05 20:08:00.293578 | orchestrator | 08a5ea49eda1 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-05 20:08:00.293591 | orchestrator | b6baca6192d1 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes kolla_toolbox 2025-06-05 20:08:00.293602 | orchestrator | f950a3d784c3 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-05 20:08:00.548011 | orchestrator | 2025-06-05 20:08:00.548111 | orchestrator | ## Images 
@ testbed-node-1 2025-06-05 20:08:00.548127 | orchestrator | 2025-06-05 20:08:00.548139 | orchestrator | + echo 2025-06-05 20:08:00.548151 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-05 20:08:00.548163 | orchestrator | + echo 2025-06-05 20:08:00.548175 | orchestrator | + osism container testbed-node-1 images 2025-06-05 20:08:02.605189 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-05 20:08:02.605333 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 5 days ago 319MB 2025-06-05 20:08:02.605349 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 days ago 319MB 2025-06-05 20:08:02.605361 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 5 days ago 330MB 2025-06-05 20:08:02.605372 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 5 days ago 1.59GB 2025-06-05 20:08:02.605384 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 5 days ago 1.55GB 2025-06-05 20:08:02.605395 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 5 days ago 419MB 2025-06-05 20:08:02.605406 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 days ago 747MB 2025-06-05 20:08:02.605417 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 5 days ago 376MB 2025-06-05 20:08:02.605427 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 5 days ago 327MB 2025-06-05 20:08:02.605439 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 days ago 629MB 2025-06-05 20:08:02.605449 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 5 days ago 1.01GB 2025-06-05 20:08:02.605461 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 
5a4e6980c376 5 days ago 591MB 2025-06-05 20:08:02.605472 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 5 days ago 354MB 2025-06-05 20:08:02.605483 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 days ago 411MB 2025-06-05 20:08:02.605494 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 5 days ago 352MB 2025-06-05 20:08:02.605505 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 5 days ago 345MB 2025-06-05 20:08:02.605516 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 days ago 359MB 2025-06-05 20:08:02.605527 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 5 days ago 325MB 2025-06-05 20:08:02.605538 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 5 days ago 326MB 2025-06-05 20:08:02.605548 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 5 days ago 1.21GB 2025-06-05 20:08:02.605560 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 5 days ago 362MB 2025-06-05 20:08:02.605571 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 5 days ago 362MB 2025-06-05 20:08:02.605582 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 5 days ago 1.15GB 2025-06-05 20:08:02.605592 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 5 days ago 1.04GB 2025-06-05 20:08:02.605603 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 5 days ago 1.25GB 2025-06-05 20:08:02.605614 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 
ecd3067dd808 5 days ago 1.2GB 2025-06-05 20:08:02.605632 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 5 days ago 1.31GB 2025-06-05 20:08:02.605643 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 5 days ago 1.41GB 2025-06-05 20:08:02.605654 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 5 days ago 1.41GB 2025-06-05 20:08:02.605665 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 5 days ago 1.06GB 2025-06-05 20:08:02.605676 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 5 days ago 1.06GB 2025-06-05 20:08:02.605723 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 5 days ago 1.05GB 2025-06-05 20:08:02.605738 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 5 days ago 1.05GB 2025-06-05 20:08:02.605752 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 5 days ago 1.05GB 2025-06-05 20:08:02.605765 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 5 days ago 1.05GB 2025-06-05 20:08:02.605778 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 5 days ago 1.3GB 2025-06-05 20:08:02.605792 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 5 days ago 1.29GB 2025-06-05 20:08:02.605805 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 5 days ago 1.42GB 2025-06-05 20:08:02.605818 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 5 days ago 1.29GB 2025-06-05 20:08:02.605831 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 5 
days ago 1.06GB 2025-06-05 20:08:02.605880 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 5 days ago 1.06GB 2025-06-05 20:08:02.605901 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 5 days ago 1.06GB 2025-06-05 20:08:02.605927 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 5 days ago 1.11GB 2025-06-05 20:08:02.605945 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 5 days ago 1.13GB 2025-06-05 20:08:02.605957 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 5 days ago 1.11GB 2025-06-05 20:08:02.605967 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 5 days ago 947MB 2025-06-05 20:08:02.605978 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 5 days ago 947MB 2025-06-05 20:08:02.605989 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 5 days ago 948MB 2025-06-05 20:08:02.606000 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 5 days ago 948MB 2025-06-05 20:08:02.606011 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-05 20:08:02.842674 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-05 20:08:02.843006 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-05 20:08:02.897781 | orchestrator | 2025-06-05 20:08:02.897901 | orchestrator | ## Containers @ testbed-node-2 2025-06-05 20:08:02.897917 | orchestrator | 2025-06-05 20:08:02.897929 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-05 20:08:02.897941 | orchestrator | + echo 2025-06-05 20:08:02.897977 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-05 20:08:02.897990 | orchestrator | + echo 2025-06-05 
20:08:02.898001 | orchestrator | + osism container testbed-node-2 ps 2025-06-05 20:08:05.038954 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-05 20:08:05.039071 | orchestrator | 35076ee892d5 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-05 20:08:05.039089 | orchestrator | 9288e1e6b8ff registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-05 20:08:05.039101 | orchestrator | 32f9bdc1d47e registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-05 20:08:05.040779 | orchestrator | d0e273d36026 registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-05 20:08:05.040802 | orchestrator | 183b013d7d70 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-05 20:08:05.040814 | orchestrator | 40ab908a3ba1 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-05 20:08:05.040825 | orchestrator | 92666109b2f5 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-05 20:08:05.040891 | orchestrator | 6f4e393e5acd registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-05 20:08:05.040905 | orchestrator | 43ff729ddc97 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-05 20:08:05.040918 | orchestrator | 17637dd82845 
registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-05 20:08:05.040930 | orchestrator | 63891c269004 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_memcached_exporter 2025-06-05 20:08:05.040941 | orchestrator | 142be5765ae1 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_mysqld_exporter 2025-06-05 20:08:05.040952 | orchestrator | 7e7570dd8adb registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_node_exporter 2025-06-05 20:08:05.040963 | orchestrator | c3edb661f46b registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-05 20:08:05.040992 | orchestrator | 9863ce3b5893 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-05 20:08:05.041004 | orchestrator | 7afd12a4178a registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-05 20:08:05.041015 | orchestrator | f9cf300e05a2 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-06-05 20:08:05.041047 | orchestrator | f4eccf92ef9a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-05 20:08:05.041058 | orchestrator | 1aa91d2a307c registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-05 
20:08:05.041069 | orchestrator | f47aa17e1205 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-05 20:08:05.041080 | orchestrator | 2a2e230a37d5 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-05 20:08:05.041091 | orchestrator | d31e7e856c53 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-06-05 20:08:05.041102 | orchestrator | 814b5f42fde3 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-05 20:08:05.041113 | orchestrator | 8fab1dd918ab registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-06-05 20:08:05.041136 | orchestrator | 3561f4fc18a6 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2025-06-05 20:08:05.041147 | orchestrator | 2e0981bee5f3 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) placement_api 2025-06-05 20:08:05.041166 | orchestrator | ce5a1e1004e8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 15 minutes ceph-mgr-testbed-node-2 2025-06-05 20:08:05.041185 | orchestrator | 71b7b2880388 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-05 20:08:05.041203 | orchestrator | 50da57d1fd03 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 
2025-06-05 20:08:05.041222 | orchestrator | 8699395f7016 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-05 20:08:05.041240 | orchestrator | e435e741643e registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-05 20:08:05.041251 | orchestrator | 3c04ec5efdd2 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-05 20:08:05.041262 | orchestrator | 3fe9f7394be8 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-05 20:08:05.041273 | orchestrator | 47766dd74e5a registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-05 20:08:05.041284 | orchestrator | 1ea97b744451 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-06-05 20:08:05.041304 | orchestrator | b04d232a4f6a registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 23 minutes keepalived 2025-06-05 20:08:05.041315 | orchestrator | c4f37a544cc4 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-05 20:08:05.041326 | orchestrator | d9d6ddfbd856 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-05 20:08:05.041337 | orchestrator | a3ef22c9e237 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-06-05 20:08:05.041348 | orchestrator | e4daae3507f8 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_sb_db 2025-06-05 20:08:05.041359 | orchestrator | 76366d9d6ada registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_nb_db 2025-06-05 20:08:05.041370 | orchestrator | 277026a29dcb registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-05 20:08:05.041381 | orchestrator | 34ef953b84c8 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-05 20:08:05.041393 | orchestrator | b5c14b434f17 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-2 2025-06-05 20:08:05.041404 | orchestrator | 220c076463d8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-05 20:08:05.041423 | orchestrator | 26673d187f93 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-05 20:08:05.041435 | orchestrator | f1e21f8c3861 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-05 20:08:05.041452 | orchestrator | 684aa5065895 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-05 20:08:05.041464 | orchestrator | ae68c49ef070 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-05 20:08:05.041482 | orchestrator | 64430facc5d5 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 29 minutes ago 
Up 29 minutes cron 2025-06-05 20:08:05.041501 | orchestrator | 59a213c90c19 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 29 minutes kolla_toolbox 2025-06-05 20:08:05.041521 | orchestrator | e3bed80b6c98 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-05 20:08:05.371546 | orchestrator | 2025-06-05 20:08:05.371646 | orchestrator | ## Images @ testbed-node-2 2025-06-05 20:08:05.371663 | orchestrator | 2025-06-05 20:08:05.371675 | orchestrator | + echo 2025-06-05 20:08:05.371687 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-05 20:08:05.371700 | orchestrator | + echo 2025-06-05 20:08:05.371736 | orchestrator | + osism container testbed-node-2 images 2025-06-05 20:08:07.445435 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-05 20:08:07.445543 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 5 days ago 319MB 2025-06-05 20:08:07.445558 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 5 days ago 319MB 2025-06-05 20:08:07.445570 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 5 days ago 330MB 2025-06-05 20:08:07.445599 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 5 days ago 1.59GB 2025-06-05 20:08:07.445611 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 5 days ago 1.55GB 2025-06-05 20:08:07.445622 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 5 days ago 419MB 2025-06-05 20:08:07.445633 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 5 days ago 747MB 2025-06-05 20:08:07.445644 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 5 days ago 376MB 2025-06-05 20:08:07.445656 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 5 days ago 327MB 2025-06-05 20:08:07.445666 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 5 days ago 629MB 2025-06-05 20:08:07.445677 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 5 days ago 1.01GB 2025-06-05 20:08:07.445689 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 5 days ago 591MB 2025-06-05 20:08:07.445700 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 5 days ago 354MB 2025-06-05 20:08:07.445710 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 5 days ago 411MB 2025-06-05 20:08:07.445721 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 5 days ago 352MB 2025-06-05 20:08:07.445733 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 5 days ago 345MB 2025-06-05 20:08:07.445745 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 5 days ago 359MB 2025-06-05 20:08:07.445756 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 5 days ago 325MB 2025-06-05 20:08:07.445767 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 5 days ago 326MB 2025-06-05 20:08:07.445778 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 5 days ago 1.21GB 2025-06-05 20:08:07.445790 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 5 days ago 362MB 2025-06-05 20:08:07.445801 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 5 days ago 362MB 2025-06-05 20:08:07.445812 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 5 days ago 1.15GB 2025-06-05 20:08:07.445823 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 5 days ago 1.04GB 2025-06-05 20:08:07.445834 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 5 days ago 1.25GB 2025-06-05 20:08:07.445939 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 5 days ago 1.2GB 2025-06-05 20:08:07.445951 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 5 days ago 1.31GB 2025-06-05 20:08:07.445962 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 5 days ago 1.41GB 2025-06-05 20:08:07.445976 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 5 days ago 1.41GB 2025-06-05 20:08:07.445988 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 5 days ago 1.06GB 2025-06-05 20:08:07.446001 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 5 days ago 1.06GB 2025-06-05 20:08:07.446099 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 5 days ago 1.05GB 2025-06-05 20:08:07.446115 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 5 days ago 1.05GB 2025-06-05 20:08:07.446127 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 5 days ago 1.05GB 2025-06-05 20:08:07.446138 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 5 days ago 1.05GB 2025-06-05 20:08:07.446149 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 5 days ago 1.3GB 2025-06-05 20:08:07.446160 | orchestrator | 
registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 5 days ago 1.29GB 2025-06-05 20:08:07.446171 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 5 days ago 1.42GB 2025-06-05 20:08:07.446182 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 5 days ago 1.29GB 2025-06-05 20:08:07.446193 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 5 days ago 1.06GB 2025-06-05 20:08:07.446204 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 5 days ago 1.06GB 2025-06-05 20:08:07.446215 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 5 days ago 1.06GB 2025-06-05 20:08:07.446226 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 5 days ago 1.11GB 2025-06-05 20:08:07.446237 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 5 days ago 1.13GB 2025-06-05 20:08:07.446248 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 5 days ago 1.11GB 2025-06-05 20:08:07.446259 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 5 days ago 947MB 2025-06-05 20:08:07.446270 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 5 days ago 948MB 2025-06-05 20:08:07.446281 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 5 days ago 947MB 2025-06-05 20:08:07.446302 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 5 days ago 948MB 2025-06-05 20:08:07.446313 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 3 weeks ago 1.27GB 2025-06-05 20:08:07.685675 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-05 
20:08:07.694569 | orchestrator | + set -e 2025-06-05 20:08:07.694608 | orchestrator | + source /opt/manager-vars.sh 2025-06-05 20:08:07.695712 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-05 20:08:07.695781 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-05 20:08:07.695802 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-05 20:08:07.695822 | orchestrator | ++ CEPH_VERSION=reef 2025-06-05 20:08:07.695900 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-05 20:08:07.695923 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-05 20:08:07.695941 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-05 20:08:07.695953 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-05 20:08:07.695964 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-05 20:08:07.695975 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-05 20:08:07.695986 | orchestrator | ++ export ARA=false 2025-06-05 20:08:07.695997 | orchestrator | ++ ARA=false 2025-06-05 20:08:07.696009 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-05 20:08:07.696020 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-05 20:08:07.696031 | orchestrator | ++ export TEMPEST=false 2025-06-05 20:08:07.696042 | orchestrator | ++ TEMPEST=false 2025-06-05 20:08:07.696053 | orchestrator | ++ export IS_ZUUL=true 2025-06-05 20:08:07.696064 | orchestrator | ++ IS_ZUUL=true 2025-06-05 20:08:07.696075 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172 2025-06-05 20:08:07.696086 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172 2025-06-05 20:08:07.696097 | orchestrator | ++ export EXTERNAL_API=false 2025-06-05 20:08:07.696108 | orchestrator | ++ EXTERNAL_API=false 2025-06-05 20:08:07.696119 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-05 20:08:07.696130 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-05 20:08:07.696140 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-05 20:08:07.696151 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-05 20:08:07.696162 | 
orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-05 20:08:07.696173 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-05 20:08:07.696201 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-05 20:08:07.696212 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-05 20:08:07.706538 | orchestrator | + set -e 2025-06-05 20:08:07.706569 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-05 20:08:07.706581 | orchestrator | ++ export INTERACTIVE=false 2025-06-05 20:08:07.706593 | orchestrator | ++ INTERACTIVE=false 2025-06-05 20:08:07.706611 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-05 20:08:07.706622 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-05 20:08:07.706957 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-05 20:08:07.707824 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-05 20:08:07.712152 | orchestrator | 2025-06-05 20:08:07.712264 | orchestrator | # Ceph status 2025-06-05 20:08:07.712278 | orchestrator | 2025-06-05 20:08:07.712290 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-05 20:08:07.712302 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-05 20:08:07.712314 | orchestrator | + echo 2025-06-05 20:08:07.712330 | orchestrator | + echo '# Ceph status' 2025-06-05 20:08:07.712342 | orchestrator | + echo 2025-06-05 20:08:07.712355 | orchestrator | + ceph -s 2025-06-05 20:08:08.292176 | orchestrator | cluster: 2025-06-05 20:08:08.292281 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-05 20:08:08.292299 | orchestrator | health: HEALTH_OK 2025-06-05 20:08:08.292312 | orchestrator | 2025-06-05 20:08:08.292324 | orchestrator | services: 2025-06-05 20:08:08.292335 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 27m) 2025-06-05 20:08:08.292348 | orchestrator | mgr: 
testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 2025-06-05 20:08:08.292375 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-05 20:08:08.292387 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 24m) 2025-06-05 20:08:08.292410 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-05 20:08:08.292421 | orchestrator | 2025-06-05 20:08:08.292432 | orchestrator | data: 2025-06-05 20:08:08.292443 | orchestrator | volumes: 1/1 healthy 2025-06-05 20:08:08.292454 | orchestrator | pools: 14 pools, 417 pgs 2025-06-05 20:08:08.292465 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-05 20:08:08.292476 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-05 20:08:08.292487 | orchestrator | pgs: 417 active+clean 2025-06-05 20:08:08.292498 | orchestrator | 2025-06-05 20:08:08.335391 | orchestrator | 2025-06-05 20:08:08.335467 | orchestrator | # Ceph versions 2025-06-05 20:08:08.335479 | orchestrator | 2025-06-05 20:08:08.335491 | orchestrator | + echo 2025-06-05 20:08:08.335503 | orchestrator | + echo '# Ceph versions' 2025-06-05 20:08:08.335514 | orchestrator | + echo 2025-06-05 20:08:08.335525 | orchestrator | + ceph versions 2025-06-05 20:08:08.925409 | orchestrator | { 2025-06-05 20:08:08.925517 | orchestrator | "mon": { 2025-06-05 20:08:08.925562 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-05 20:08:08.925576 | orchestrator | }, 2025-06-05 20:08:08.925587 | orchestrator | "mgr": { 2025-06-05 20:08:08.925598 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-05 20:08:08.925610 | orchestrator | }, 2025-06-05 20:08:08.925620 | orchestrator | "osd": { 2025-06-05 20:08:08.925632 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-05 20:08:08.925643 | orchestrator | }, 2025-06-05 20:08:08.925668 | orchestrator | "mds": { 2025-06-05 
20:08:08.925697 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-05 20:08:08.925709 | orchestrator | }, 2025-06-05 20:08:08.925720 | orchestrator | "rgw": { 2025-06-05 20:08:08.925731 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-05 20:08:08.925742 | orchestrator | }, 2025-06-05 20:08:08.925753 | orchestrator | "overall": { 2025-06-05 20:08:08.925765 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-05 20:08:08.925777 | orchestrator | } 2025-06-05 20:08:08.925788 | orchestrator | } 2025-06-05 20:08:08.973162 | orchestrator | 2025-06-05 20:08:08.973245 | orchestrator | # Ceph OSD tree 2025-06-05 20:08:08.973258 | orchestrator | 2025-06-05 20:08:08.973268 | orchestrator | + echo 2025-06-05 20:08:08.973279 | orchestrator | + echo '# Ceph OSD tree' 2025-06-05 20:08:08.973289 | orchestrator | + echo 2025-06-05 20:08:08.973299 | orchestrator | + ceph osd df tree 2025-06-05 20:08:09.483736 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-05 20:08:09.483899 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-05 20:08:09.483916 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-05 20:08:09.483928 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.26 1.06 183 up osd.0 2025-06-05 20:08:09.483939 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.57 0.94 221 up osd.3 2025-06-05 20:08:09.483950 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-05 20:08:09.483961 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.38 1.08 218 up osd.1 2025-06-05 20:08:09.483972 | 
orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 74 MiB 19 GiB 5.46 0.92 188 up osd.5 2025-06-05 20:08:09.483982 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-05 20:08:09.483993 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.71 1.13 207 up osd.2 2025-06-05 20:08:09.484004 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.0 GiB 979 MiB 1 KiB 70 MiB 19 GiB 5.12 0.87 201 up osd.4 2025-06-05 20:08:09.484015 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-05 20:08:09.484027 | orchestrator | MIN/MAX VAR: 0.87/1.13 STDDEV: 0.57 2025-06-05 20:08:09.530109 | orchestrator | 2025-06-05 20:08:09.530192 | orchestrator | # Ceph monitor status 2025-06-05 20:08:09.530206 | orchestrator | 2025-06-05 20:08:09.530218 | orchestrator | + echo 2025-06-05 20:08:09.530229 | orchestrator | + echo '# Ceph monitor status' 2025-06-05 20:08:09.530241 | orchestrator | + echo 2025-06-05 20:08:09.530253 | orchestrator | + ceph mon stat 2025-06-05 20:08:10.100733 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-05 20:08:10.142269 | orchestrator | 2025-06-05 20:08:10.142393 | orchestrator | # Ceph quorum status 2025-06-05 20:08:10.142411 | orchestrator | 2025-06-05 20:08:10.142423 | orchestrator | + echo 2025-06-05 20:08:10.142435 | orchestrator | + echo '# Ceph quorum status' 2025-06-05 20:08:10.142447 | orchestrator | + echo 2025-06-05 20:08:10.142471 | orchestrator | + ceph quorum_status 2025-06-05 20:08:10.142484 | orchestrator | + jq 2025-06-05 20:08:10.762548 | orchestrator | { 2025-06-05 20:08:10.762647 | 
orchestrator | "election_epoch": 8, 2025-06-05 20:08:10.762665 | orchestrator | "quorum": [ 2025-06-05 20:08:10.762678 | orchestrator | 0, 2025-06-05 20:08:10.762690 | orchestrator | 1, 2025-06-05 20:08:10.762702 | orchestrator | 2 2025-06-05 20:08:10.762714 | orchestrator | ], 2025-06-05 20:08:10.762726 | orchestrator | "quorum_names": [ 2025-06-05 20:08:10.762739 | orchestrator | "testbed-node-0", 2025-06-05 20:08:10.762750 | orchestrator | "testbed-node-1", 2025-06-05 20:08:10.762762 | orchestrator | "testbed-node-2" 2025-06-05 20:08:10.762774 | orchestrator | ], 2025-06-05 20:08:10.762787 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-05 20:08:10.762800 | orchestrator | "quorum_age": 1671, 2025-06-05 20:08:10.762812 | orchestrator | "features": { 2025-06-05 20:08:10.762824 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-05 20:08:10.762861 | orchestrator | "quorum_mon": [ 2025-06-05 20:08:10.762874 | orchestrator | "kraken", 2025-06-05 20:08:10.762885 | orchestrator | "luminous", 2025-06-05 20:08:10.762896 | orchestrator | "mimic", 2025-06-05 20:08:10.762907 | orchestrator | "osdmap-prune", 2025-06-05 20:08:10.762918 | orchestrator | "nautilus", 2025-06-05 20:08:10.762929 | orchestrator | "octopus", 2025-06-05 20:08:10.762940 | orchestrator | "pacific", 2025-06-05 20:08:10.762951 | orchestrator | "elector-pinging", 2025-06-05 20:08:10.762962 | orchestrator | "quincy", 2025-06-05 20:08:10.762973 | orchestrator | "reef" 2025-06-05 20:08:10.762984 | orchestrator | ] 2025-06-05 20:08:10.762995 | orchestrator | }, 2025-06-05 20:08:10.763006 | orchestrator | "monmap": { 2025-06-05 20:08:10.763017 | orchestrator | "epoch": 1, 2025-06-05 20:08:10.763028 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-05 20:08:10.763040 | orchestrator | "modified": "2025-06-05T19:40:00.126667Z", 2025-06-05 20:08:10.763052 | orchestrator | "created": "2025-06-05T19:40:00.126667Z", 2025-06-05 20:08:10.763063 | orchestrator | 
"min_mon_release": 18, 2025-06-05 20:08:10.763074 | orchestrator | "min_mon_release_name": "reef", 2025-06-05 20:08:10.763085 | orchestrator | "election_strategy": 1, 2025-06-05 20:08:10.763096 | orchestrator | "disallowed_leaders: ": "", 2025-06-05 20:08:10.763107 | orchestrator | "stretch_mode": false, 2025-06-05 20:08:10.763120 | orchestrator | "tiebreaker_mon": "", 2025-06-05 20:08:10.763133 | orchestrator | "removed_ranks: ": "", 2025-06-05 20:08:10.763145 | orchestrator | "features": { 2025-06-05 20:08:10.763158 | orchestrator | "persistent": [ 2025-06-05 20:08:10.763170 | orchestrator | "kraken", 2025-06-05 20:08:10.763183 | orchestrator | "luminous", 2025-06-05 20:08:10.763196 | orchestrator | "mimic", 2025-06-05 20:08:10.763208 | orchestrator | "osdmap-prune", 2025-06-05 20:08:10.763221 | orchestrator | "nautilus", 2025-06-05 20:08:10.763233 | orchestrator | "octopus", 2025-06-05 20:08:10.763245 | orchestrator | "pacific", 2025-06-05 20:08:10.763258 | orchestrator | "elector-pinging", 2025-06-05 20:08:10.763272 | orchestrator | "quincy", 2025-06-05 20:08:10.763285 | orchestrator | "reef" 2025-06-05 20:08:10.763298 | orchestrator | ], 2025-06-05 20:08:10.763312 | orchestrator | "optional": [] 2025-06-05 20:08:10.763325 | orchestrator | }, 2025-06-05 20:08:10.763338 | orchestrator | "mons": [ 2025-06-05 20:08:10.763351 | orchestrator | { 2025-06-05 20:08:10.763364 | orchestrator | "rank": 0, 2025-06-05 20:08:10.763377 | orchestrator | "name": "testbed-node-0", 2025-06-05 20:08:10.763390 | orchestrator | "public_addrs": { 2025-06-05 20:08:10.763403 | orchestrator | "addrvec": [ 2025-06-05 20:08:10.763416 | orchestrator | { 2025-06-05 20:08:10.763429 | orchestrator | "type": "v2", 2025-06-05 20:08:10.763443 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-05 20:08:10.763455 | orchestrator | "nonce": 0 2025-06-05 20:08:10.763468 | orchestrator | }, 2025-06-05 20:08:10.763479 | orchestrator | { 2025-06-05 20:08:10.763490 | orchestrator | "type": "v1", 
2025-06-05 20:08:10.763502 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-05 20:08:10.763513 | orchestrator | "nonce": 0 2025-06-05 20:08:10.763524 | orchestrator | } 2025-06-05 20:08:10.763535 | orchestrator | ] 2025-06-05 20:08:10.763546 | orchestrator | }, 2025-06-05 20:08:10.763557 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-05 20:08:10.763593 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-05 20:08:10.763604 | orchestrator | "priority": 0, 2025-06-05 20:08:10.763616 | orchestrator | "weight": 0, 2025-06-05 20:08:10.763627 | orchestrator | "crush_location": "{}" 2025-06-05 20:08:10.763638 | orchestrator | }, 2025-06-05 20:08:10.763649 | orchestrator | { 2025-06-05 20:08:10.763660 | orchestrator | "rank": 1, 2025-06-05 20:08:10.763671 | orchestrator | "name": "testbed-node-1", 2025-06-05 20:08:10.763682 | orchestrator | "public_addrs": { 2025-06-05 20:08:10.763693 | orchestrator | "addrvec": [ 2025-06-05 20:08:10.763704 | orchestrator | { 2025-06-05 20:08:10.763715 | orchestrator | "type": "v2", 2025-06-05 20:08:10.763726 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-05 20:08:10.763737 | orchestrator | "nonce": 0 2025-06-05 20:08:10.763748 | orchestrator | }, 2025-06-05 20:08:10.763760 | orchestrator | { 2025-06-05 20:08:10.763770 | orchestrator | "type": "v1", 2025-06-05 20:08:10.763782 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-05 20:08:10.763792 | orchestrator | "nonce": 0 2025-06-05 20:08:10.763803 | orchestrator | } 2025-06-05 20:08:10.763814 | orchestrator | ] 2025-06-05 20:08:10.763825 | orchestrator | }, 2025-06-05 20:08:10.763853 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-05 20:08:10.763865 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-05 20:08:10.763876 | orchestrator | "priority": 0, 2025-06-05 20:08:10.763887 | orchestrator | "weight": 0, 2025-06-05 20:08:10.763898 | orchestrator | "crush_location": "{}" 2025-06-05 20:08:10.763909 | orchestrator | }, 2025-06-05 
20:08:10.763920 | orchestrator | { 2025-06-05 20:08:10.763931 | orchestrator | "rank": 2, 2025-06-05 20:08:10.763942 | orchestrator | "name": "testbed-node-2", 2025-06-05 20:08:10.763953 | orchestrator | "public_addrs": { 2025-06-05 20:08:10.763964 | orchestrator | "addrvec": [ 2025-06-05 20:08:10.763975 | orchestrator | { 2025-06-05 20:08:10.763986 | orchestrator | "type": "v2", 2025-06-05 20:08:10.763997 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-05 20:08:10.764008 | orchestrator | "nonce": 0 2025-06-05 20:08:10.764019 | orchestrator | }, 2025-06-05 20:08:10.764030 | orchestrator | { 2025-06-05 20:08:10.764040 | orchestrator | "type": "v1", 2025-06-05 20:08:10.764052 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-05 20:08:10.764062 | orchestrator | "nonce": 0 2025-06-05 20:08:10.764074 | orchestrator | } 2025-06-05 20:08:10.764084 | orchestrator | ] 2025-06-05 20:08:10.764095 | orchestrator | }, 2025-06-05 20:08:10.764106 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-05 20:08:10.764117 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-05 20:08:10.764128 | orchestrator | "priority": 0, 2025-06-05 20:08:10.764139 | orchestrator | "weight": 0, 2025-06-05 20:08:10.764150 | orchestrator | "crush_location": "{}" 2025-06-05 20:08:10.764161 | orchestrator | } 2025-06-05 20:08:10.764172 | orchestrator | ] 2025-06-05 20:08:10.764183 | orchestrator | } 2025-06-05 20:08:10.764194 | orchestrator | } 2025-06-05 20:08:10.764205 | orchestrator | 2025-06-05 20:08:10.764216 | orchestrator | # Ceph free space status 2025-06-05 20:08:10.764227 | orchestrator | 2025-06-05 20:08:10.764239 | orchestrator | + echo 2025-06-05 20:08:10.764250 | orchestrator | + echo '# Ceph free space status' 2025-06-05 20:08:10.764261 | orchestrator | + echo 2025-06-05 20:08:10.764272 | orchestrator | + ceph df 2025-06-05 20:08:11.322985 | orchestrator | --- RAW STORAGE --- 2025-06-05 20:08:11.323091 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 
2025-06-05 20:08:11.323115 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-05 20:08:11.323127 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-05 20:08:11.323139 | orchestrator | 2025-06-05 20:08:11.323152 | orchestrator | --- POOLS --- 2025-06-05 20:08:11.323164 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-05 20:08:11.323176 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB 2025-06-05 20:08:11.323188 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-05 20:08:11.323199 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-05 20:08:11.323210 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-05 20:08:11.323221 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-05 20:08:11.323255 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-05 20:08:11.323266 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-05 20:08:11.323277 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-05 20:08:11.323288 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB 2025-06-05 20:08:11.323299 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-05 20:08:11.323310 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-05 20:08:11.323321 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB 2025-06-05 20:08:11.323332 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-05 20:08:11.323343 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-05 20:08:11.366164 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-05 20:08:11.426120 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-05 20:08:11.426210 | orchestrator | + [[ ! 
-e /etc/redhat-release ]] 2025-06-05 20:08:11.426225 | orchestrator | + osism apply facts 2025-06-05 20:08:13.054640 | orchestrator | Registering Redlock._acquired_script 2025-06-05 20:08:13.054712 | orchestrator | Registering Redlock._extend_script 2025-06-05 20:08:13.054718 | orchestrator | Registering Redlock._release_script 2025-06-05 20:08:13.110700 | orchestrator | 2025-06-05 20:08:13 | INFO  | Task c8790ed4-1129-4f45-b059-038cf0e8c035 (facts) was prepared for execution. 2025-06-05 20:08:13.110791 | orchestrator | 2025-06-05 20:08:13 | INFO  | It takes a moment until task c8790ed4-1129-4f45-b059-038cf0e8c035 (facts) has been started and output is visible here. 2025-06-05 20:08:17.156471 | orchestrator | 2025-06-05 20:08:17.156591 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-05 20:08:17.158557 | orchestrator | 2025-06-05 20:08:17.159554 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-05 20:08:17.160286 | orchestrator | Thursday 05 June 2025 20:08:17 +0000 (0:00:00.256) 0:00:00.256 ********* 2025-06-05 20:08:18.570325 | orchestrator | ok: [testbed-manager] 2025-06-05 20:08:18.570427 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:18.572898 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:18.572951 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:18.574734 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:08:18.574781 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:08:18.574798 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:08:18.575285 | orchestrator | 2025-06-05 20:08:18.576016 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-05 20:08:18.577037 | orchestrator | Thursday 05 June 2025 20:08:18 +0000 (0:00:01.413) 0:00:01.669 ********* 2025-06-05 20:08:18.732377 | orchestrator | skipping: [testbed-manager] 2025-06-05 20:08:18.817433 | orchestrator | skipping: 
[testbed-node-0] 2025-06-05 20:08:18.902881 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:08:18.976102 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:08:19.066391 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:08:19.825141 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:08:19.826218 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:08:19.828811 | orchestrator | 2025-06-05 20:08:19.828866 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-05 20:08:19.829615 | orchestrator | 2025-06-05 20:08:19.830769 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-05 20:08:19.831883 | orchestrator | Thursday 05 June 2025 20:08:19 +0000 (0:00:01.258) 0:00:02.928 ********* 2025-06-05 20:08:24.917819 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:24.918490 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:24.919387 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:24.923486 | orchestrator | ok: [testbed-manager] 2025-06-05 20:08:24.923534 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:08:24.923546 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:08:24.923587 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:08:24.923600 | orchestrator | 2025-06-05 20:08:24.923621 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-05 20:08:24.923881 | orchestrator | 2025-06-05 20:08:24.924551 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-05 20:08:24.925077 | orchestrator | Thursday 05 June 2025 20:08:24 +0000 (0:00:05.093) 0:00:08.021 ********* 2025-06-05 20:08:25.081126 | orchestrator | skipping: [testbed-manager] 2025-06-05 20:08:25.158655 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:08:25.239545 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:08:25.326242 | orchestrator | 
skipping: [testbed-node-2] 2025-06-05 20:08:25.406452 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:08:25.447355 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:08:25.447467 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:08:25.448140 | orchestrator | 2025-06-05 20:08:25.449325 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-05 20:08:25.449813 | orchestrator | 2025-06-05 20:08:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-05 20:08:25.449922 | orchestrator | 2025-06-05 20:08:25 | INFO  | Please wait and do not abort execution. 2025-06-05 20:08:25.450271 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.450890 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.451214 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.452347 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.454278 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.454296 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.454708 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-05 20:08:25.455268 | orchestrator | 2025-06-05 20:08:25.456625 | orchestrator | 2025-06-05 20:08:25.456653 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-05 20:08:25.456974 | orchestrator | Thursday 05 June 2025 20:08:25 +0000 (0:00:00.529) 0:00:08.550 ********* 2025-06-05 20:08:25.457818 | orchestrator | 
=============================================================================== 2025-06-05 20:08:25.458932 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.09s 2025-06-05 20:08:25.459271 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.41s 2025-06-05 20:08:25.459755 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.26s 2025-06-05 20:08:25.460764 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-05 20:08:26.129130 | orchestrator | + osism validate ceph-mons 2025-06-05 20:08:27.800515 | orchestrator | Registering Redlock._acquired_script 2025-06-05 20:08:27.800640 | orchestrator | Registering Redlock._extend_script 2025-06-05 20:08:27.800665 | orchestrator | Registering Redlock._release_script 2025-06-05 20:08:48.252589 | orchestrator | 2025-06-05 20:08:48.252717 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-05 20:08:48.252735 | orchestrator | 2025-06-05 20:08:48.252749 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-05 20:08:48.252760 | orchestrator | Thursday 05 June 2025 20:08:32 +0000 (0:00:00.420) 0:00:00.420 ********* 2025-06-05 20:08:48.252793 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-05 20:08:48.252804 | orchestrator | 2025-06-05 20:08:48.252815 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-05 20:08:48.252826 | orchestrator | Thursday 05 June 2025 20:08:33 +0000 (0:00:01.657) 0:00:02.078 ********* 2025-06-05 20:08:48.252890 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-05 20:08:48.252904 | orchestrator | 2025-06-05 20:08:48.252915 | orchestrator | TASK [Define report vars] ****************************************************** 
2025-06-05 20:08:48.252926 | orchestrator | Thursday 05 June 2025 20:08:34 +0000 (0:00:00.828) 0:00:02.907 ********* 2025-06-05 20:08:48.252937 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:48.252948 | orchestrator | 2025-06-05 20:08:48.252959 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-05 20:08:48.252971 | orchestrator | Thursday 05 June 2025 20:08:34 +0000 (0:00:00.261) 0:00:03.169 ********* 2025-06-05 20:08:48.252981 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:48.252993 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:48.253004 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:48.253015 | orchestrator | 2025-06-05 20:08:48.253026 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-05 20:08:48.253036 | orchestrator | Thursday 05 June 2025 20:08:35 +0000 (0:00:00.282) 0:00:03.451 ********* 2025-06-05 20:08:48.253047 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:48.253058 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:48.253068 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:48.253079 | orchestrator | 2025-06-05 20:08:48.253090 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-05 20:08:48.253101 | orchestrator | Thursday 05 June 2025 20:08:36 +0000 (0:00:01.044) 0:00:04.496 ********* 2025-06-05 20:08:48.253117 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:08:48.253137 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:08:48.253156 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:08:48.253175 | orchestrator | 2025-06-05 20:08:48.253196 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-05 20:08:48.253216 | orchestrator | Thursday 05 June 2025 20:08:36 +0000 (0:00:00.276) 0:00:04.772 ********* 2025-06-05 20:08:48.253258 | orchestrator | ok: [testbed-node-0] 
2025-06-05 20:08:48.253277 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:48.253296 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:48.253317 | orchestrator | 2025-06-05 20:08:48.253337 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-05 20:08:48.253355 | orchestrator | Thursday 05 June 2025 20:08:36 +0000 (0:00:00.494) 0:00:05.267 ********* 2025-06-05 20:08:48.253374 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:48.253385 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:48.253396 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:48.253406 | orchestrator | 2025-06-05 20:08:48.253417 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-05 20:08:48.253428 | orchestrator | Thursday 05 June 2025 20:08:37 +0000 (0:00:00.303) 0:00:05.571 ********* 2025-06-05 20:08:48.253453 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:08:48.253464 | orchestrator | skipping: [testbed-node-1] 2025-06-05 20:08:48.253475 | orchestrator | skipping: [testbed-node-2] 2025-06-05 20:08:48.253486 | orchestrator | 2025-06-05 20:08:48.253498 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-05 20:08:48.253509 | orchestrator | Thursday 05 June 2025 20:08:37 +0000 (0:00:00.300) 0:00:05.871 ********* 2025-06-05 20:08:48.253531 | orchestrator | ok: [testbed-node-0] 2025-06-05 20:08:48.253542 | orchestrator | ok: [testbed-node-1] 2025-06-05 20:08:48.253553 | orchestrator | ok: [testbed-node-2] 2025-06-05 20:08:48.253564 | orchestrator | 2025-06-05 20:08:48.253574 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-05 20:08:48.253585 | orchestrator | Thursday 05 June 2025 20:08:37 +0000 (0:00:00.299) 0:00:06.171 ********* 2025-06-05 20:08:48.253596 | orchestrator | skipping: [testbed-node-0] 2025-06-05 20:08:48.253670 | orchestrator | 
2025-06-05 20:08:48.253682 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-05 20:08:48.253693 | orchestrator | Thursday 05 June 2025 20:08:38 +0000 (0:00:00.668) 0:00:06.839 *********
2025-06-05 20:08:48.253704 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.253715 | orchestrator |
2025-06-05 20:08:48.253726 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-05 20:08:48.253736 | orchestrator | Thursday 05 June 2025 20:08:38 +0000 (0:00:00.238) 0:00:07.078 *********
2025-06-05 20:08:48.253764 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.253775 | orchestrator |
2025-06-05 20:08:48.253786 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:08:48.253798 | orchestrator | Thursday 05 June 2025 20:08:38 +0000 (0:00:00.246) 0:00:07.324 *********
2025-06-05 20:08:48.253809 | orchestrator |
2025-06-05 20:08:48.253820 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:08:48.253830 | orchestrator | Thursday 05 June 2025 20:08:38 +0000 (0:00:00.065) 0:00:07.390 *********
2025-06-05 20:08:48.253883 | orchestrator |
2025-06-05 20:08:48.253894 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:08:48.253905 | orchestrator | Thursday 05 June 2025 20:08:39 +0000 (0:00:00.084) 0:00:07.475 *********
2025-06-05 20:08:48.253916 | orchestrator |
2025-06-05 20:08:48.253927 | orchestrator | TASK [Print report file information] *******************************************
2025-06-05 20:08:48.253937 | orchestrator | Thursday 05 June 2025 20:08:39 +0000 (0:00:00.073) 0:00:07.549 *********
2025-06-05 20:08:48.253948 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.253959 | orchestrator |
2025-06-05 20:08:48.253970 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-05 20:08:48.253981 | orchestrator | Thursday 05 June 2025 20:08:39 +0000 (0:00:00.253) 0:00:07.803 *********
2025-06-05 20:08:48.253991 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254002 | orchestrator |
2025-06-05 20:08:48.254091 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-06-05 20:08:48.254107 | orchestrator | Thursday 05 June 2025 20:08:39 +0000 (0:00:00.238) 0:00:08.042 *********
2025-06-05 20:08:48.254118 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254129 | orchestrator |
2025-06-05 20:08:48.254140 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-06-05 20:08:48.254151 | orchestrator | Thursday 05 June 2025 20:08:39 +0000 (0:00:00.116) 0:00:08.158 *********
2025-06-05 20:08:48.254162 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:08:48.254181 | orchestrator |
2025-06-05 20:08:48.254200 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-06-05 20:08:48.254220 | orchestrator | Thursday 05 June 2025 20:08:41 +0000 (0:00:01.641) 0:00:09.799 *********
2025-06-05 20:08:48.254240 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254258 | orchestrator |
2025-06-05 20:08:48.254277 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-06-05 20:08:48.254292 | orchestrator | Thursday 05 June 2025 20:08:41 +0000 (0:00:00.322) 0:00:10.121 *********
2025-06-05 20:08:48.254309 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254327 | orchestrator |
2025-06-05 20:08:48.254345 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-06-05 20:08:48.254374 | orchestrator | Thursday 05 June 2025 20:08:42 +0000 (0:00:00.307) 0:00:10.429 *********
2025-06-05 20:08:48.254394 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254412 | orchestrator |
2025-06-05 20:08:48.254424 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-06-05 20:08:48.254435 | orchestrator | Thursday 05 June 2025 20:08:42 +0000 (0:00:00.300) 0:00:10.729 *********
2025-06-05 20:08:48.254446 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254456 | orchestrator |
2025-06-05 20:08:48.254467 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-06-05 20:08:48.254478 | orchestrator | Thursday 05 June 2025 20:08:42 +0000 (0:00:00.294) 0:00:11.024 *********
2025-06-05 20:08:48.254500 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254511 | orchestrator |
2025-06-05 20:08:48.254522 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-06-05 20:08:48.254533 | orchestrator | Thursday 05 June 2025 20:08:42 +0000 (0:00:00.115) 0:00:11.140 *********
2025-06-05 20:08:48.254544 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254555 | orchestrator |
2025-06-05 20:08:48.254566 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-06-05 20:08:48.254577 | orchestrator | Thursday 05 June 2025 20:08:42 +0000 (0:00:00.135) 0:00:11.276 *********
2025-06-05 20:08:48.254588 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254599 | orchestrator |
2025-06-05 20:08:48.254610 | orchestrator | TASK [Gather status data] ******************************************************
2025-06-05 20:08:48.254621 | orchestrator | Thursday 05 June 2025 20:08:43 +0000 (0:00:00.120) 0:00:11.396 *********
2025-06-05 20:08:48.254632 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:08:48.254643 | orchestrator |
2025-06-05 20:08:48.254654 | orchestrator | TASK [Set health test data] ****************************************************
2025-06-05 20:08:48.254665 | orchestrator | Thursday 05 June 2025 20:08:44 +0000 (0:00:01.474) 0:00:12.871 *********
2025-06-05 20:08:48.254676 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254687 | orchestrator |
2025-06-05 20:08:48.254698 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-06-05 20:08:48.254709 | orchestrator | Thursday 05 June 2025 20:08:44 +0000 (0:00:00.279) 0:00:13.150 *********
2025-06-05 20:08:48.254719 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254730 | orchestrator |
2025-06-05 20:08:48.254741 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-06-05 20:08:48.254752 | orchestrator | Thursday 05 June 2025 20:08:44 +0000 (0:00:00.126) 0:00:13.276 *********
2025-06-05 20:08:48.254763 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:08:48.254774 | orchestrator |
2025-06-05 20:08:48.254785 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-06-05 20:08:48.254795 | orchestrator | Thursday 05 June 2025 20:08:45 +0000 (0:00:00.137) 0:00:13.413 *********
2025-06-05 20:08:48.254806 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254817 | orchestrator |
2025-06-05 20:08:48.254828 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-06-05 20:08:48.254879 | orchestrator | Thursday 05 June 2025 20:08:45 +0000 (0:00:00.129) 0:00:13.543 *********
2025-06-05 20:08:48.254892 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254903 | orchestrator |
2025-06-05 20:08:48.254914 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-05 20:08:48.254925 | orchestrator | Thursday 05 June 2025 20:08:45 +0000 (0:00:00.321) 0:00:13.865 *********
2025-06-05 20:08:48.254935 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:08:48.254946 | orchestrator |
2025-06-05 20:08:48.254957 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-05 20:08:48.254968 | orchestrator | Thursday 05 June 2025 20:08:45 +0000 (0:00:00.256) 0:00:14.121 *********
2025-06-05 20:08:48.254979 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:08:48.254990 | orchestrator |
2025-06-05 20:08:48.255001 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-05 20:08:48.255011 | orchestrator | Thursday 05 June 2025 20:08:45 +0000 (0:00:00.239) 0:00:14.361 *********
2025-06-05 20:08:48.255022 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:08:48.255033 | orchestrator |
2025-06-05 20:08:48.255044 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-05 20:08:48.255055 | orchestrator | Thursday 05 June 2025 20:08:47 +0000 (0:00:01.581) 0:00:15.942 *********
2025-06-05 20:08:48.255065 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:08:48.255076 | orchestrator |
2025-06-05 20:08:48.255087 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-05 20:08:48.255105 | orchestrator | Thursday 05 June 2025 20:08:47 +0000 (0:00:00.242) 0:00:16.185 *********
2025-06-05 20:08:48.255116 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:08:48.255127 | orchestrator |
2025-06-05 20:08:48.255147 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:08:50.604614 | orchestrator | Thursday 05 June 2025 20:08:48 +0000 (0:00:00.237) 0:00:16.423 *********
2025-06-05 20:08:50.604722 | orchestrator |
2025-06-05 20:08:50.604739 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:08:50.604759 | orchestrator | Thursday 05 June 2025 20:08:48 +0000 (0:00:00.067) 0:00:16.490 *********
2025-06-05 20:08:50.604777 | orchestrator |
2025-06-05 20:08:50.604796 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:08:50.604813 | orchestrator | Thursday 05 June 2025 20:08:48 +0000 (0:00:00.069) 0:00:16.559 *********
2025-06-05 20:08:50.604829 | orchestrator |
2025-06-05 20:08:50.604907 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-05 20:08:50.604926 | orchestrator | Thursday 05 June 2025 20:08:48 +0000 (0:00:00.071) 0:00:16.631 *********
2025-06-05 20:08:50.604946 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:08:50.604966 | orchestrator |
2025-06-05 20:08:50.604985 | orchestrator | TASK [Print report file information] *******************************************
2025-06-05 20:08:50.605001 | orchestrator | Thursday 05 June 2025 20:08:49 +0000 (0:00:01.455) 0:00:18.087 *********
2025-06-05 20:08:50.605013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-05 20:08:50.605042 | orchestrator |  "msg": [
2025-06-05 20:08:50.605056 | orchestrator |  "Validator run completed.",
2025-06-05 20:08:50.605068 | orchestrator |  "You can find the report file here:",
2025-06-05 20:08:50.605079 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-05T20:08:32+00:00-report.json",
2025-06-05 20:08:50.605094 | orchestrator |  "on the following host:",
2025-06-05 20:08:50.605107 | orchestrator |  "testbed-manager"
2025-06-05 20:08:50.605120 | orchestrator |  ]
2025-06-05 20:08:50.605133 | orchestrator | }
2025-06-05 20:08:50.605146 | orchestrator |
2025-06-05 20:08:50.605164 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:08:50.605178 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-05 20:08:50.605192 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 20:08:50.605205 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 20:08:50.605217 | orchestrator |
2025-06-05 20:08:50.605231 | orchestrator |
2025-06-05 20:08:50.605244 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:08:50.605257 | orchestrator | Thursday 05 June 2025 20:08:50 +0000 (0:00:00.610) 0:00:18.697 *********
2025-06-05 20:08:50.605270 | orchestrator | ===============================================================================
2025-06-05 20:08:50.605283 | orchestrator | Get timestamp for report file ------------------------------------------- 1.66s
2025-06-05 20:08:50.605294 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.64s
2025-06-05 20:08:50.605305 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s
2025-06-05 20:08:50.605315 | orchestrator | Gather status data ------------------------------------------------------ 1.48s
2025-06-05 20:08:50.605326 | orchestrator | Write report file ------------------------------------------------------- 1.46s
2025-06-05 20:08:50.605336 | orchestrator | Get container info ------------------------------------------------------ 1.04s
2025-06-05 20:08:50.605347 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-06-05 20:08:50.605383 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s
2025-06-05 20:08:50.605394 | orchestrator | Print report file information ------------------------------------------- 0.61s
2025-06-05 20:08:50.605405 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s
2025-06-05 20:08:50.605416 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s
2025-06-05 20:08:50.605426 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.32s
2025-06-05 20:08:50.605437 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.31s
2025-06-05 20:08:50.605448 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2025-06-05 20:08:50.605458 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.30s
2025-06-05 20:08:50.605469 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.30s
2025-06-05 20:08:50.605480 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s
2025-06-05 20:08:50.605490 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.29s
2025-06-05 20:08:50.605501 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2025-06-05 20:08:50.605512 | orchestrator | Set health test data ---------------------------------------------------- 0.28s
2025-06-05 20:08:50.837248 | orchestrator | + osism validate ceph-mgrs
2025-06-05 20:08:52.494379 | orchestrator | Registering Redlock._acquired_script
2025-06-05 20:08:52.494501 | orchestrator | Registering Redlock._extend_script
2025-06-05 20:08:52.494521 | orchestrator | Registering Redlock._release_script
2025-06-05 20:09:11.071911 | orchestrator |
2025-06-05 20:09:11.072025 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-06-05 20:09:11.072042 | orchestrator |
2025-06-05 20:09:11.072054 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-05 20:09:11.072066 | orchestrator | Thursday 05 June 2025 20:08:56 +0000 (0:00:00.426) 0:00:00.426 *********
2025-06-05 20:09:11.072078 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.072089 | orchestrator |
2025-06-05 20:09:11.072100 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-05 20:09:11.072112 | orchestrator | Thursday 05 June 2025 20:08:57 +0000 (0:00:00.607) 0:00:01.034 *********
2025-06-05 20:09:11.072123 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.072134 | orchestrator |
2025-06-05 20:09:11.072145 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-05 20:09:11.072156 | orchestrator | Thursday 05 June 2025 20:08:58 +0000 (0:00:00.802) 0:00:01.837 *********
2025-06-05 20:09:11.072167 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.072180 | orchestrator |
2025-06-05 20:09:11.072192 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-05 20:09:11.072203 | orchestrator | Thursday 05 June 2025 20:08:58 +0000 (0:00:00.249) 0:00:02.086 *********
2025-06-05 20:09:11.072214 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.072225 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:09:11.072236 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:09:11.072247 | orchestrator |
2025-06-05 20:09:11.072258 | orchestrator | TASK [Get container info] ******************************************************
2025-06-05 20:09:11.072269 | orchestrator | Thursday 05 June 2025 20:08:58 +0000 (0:00:00.280) 0:00:02.367 *********
2025-06-05 20:09:11.072280 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:09:11.072291 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:09:11.072302 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.072313 | orchestrator |
2025-06-05 20:09:11.072325 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-05 20:09:11.072355 | orchestrator | Thursday 05 June 2025 20:08:59 +0000 (0:00:00.961) 0:00:03.328 *********
2025-06-05 20:09:11.072366 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.072378 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:09:11.072411 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:09:11.072425 | orchestrator |
2025-06-05 20:09:11.072437 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-05 20:09:11.072451 | orchestrator | Thursday 05 June 2025 20:08:59 +0000 (0:00:00.273) 0:00:03.602 *********
2025-06-05 20:09:11.072464 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.072476 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:09:11.072488 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:09:11.072501 | orchestrator |
2025-06-05 20:09:11.072514 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-05 20:09:11.072526 | orchestrator | Thursday 05 June 2025 20:09:00 +0000 (0:00:00.476) 0:00:04.079 *********
2025-06-05 20:09:11.072539 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.072552 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:09:11.072564 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:09:11.072576 | orchestrator |
2025-06-05 20:09:11.072589 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-06-05 20:09:11.072602 | orchestrator | Thursday 05 June 2025 20:09:00 +0000 (0:00:00.283) 0:00:04.363 *********
2025-06-05 20:09:11.072615 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.072628 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:09:11.072641 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:09:11.072654 | orchestrator |
2025-06-05 20:09:11.072667 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-06-05 20:09:11.072680 | orchestrator | Thursday 05 June 2025 20:09:00 +0000 (0:00:00.262) 0:00:04.625 *********
2025-06-05 20:09:11.072690 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.072701 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:09:11.072712 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:09:11.072722 | orchestrator |
2025-06-05 20:09:11.072733 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-05 20:09:11.072744 | orchestrator | Thursday 05 June 2025 20:09:01 +0000 (0:00:00.278) 0:00:04.904 *********
2025-06-05 20:09:11.072754 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.072765 | orchestrator |
2025-06-05 20:09:11.072776 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-05 20:09:11.072786 | orchestrator | Thursday 05 June 2025 20:09:01 +0000 (0:00:00.650) 0:00:05.555 *********
2025-06-05 20:09:11.072797 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.072808 | orchestrator |
2025-06-05 20:09:11.072818 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-05 20:09:11.072829 | orchestrator | Thursday 05 June 2025 20:09:02 +0000 (0:00:00.247) 0:00:05.803 *********
2025-06-05 20:09:11.072862 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.072874 | orchestrator |
2025-06-05 20:09:11.072885 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:11.072896 | orchestrator | Thursday 05 June 2025 20:09:02 +0000 (0:00:00.244) 0:00:06.047 *********
2025-06-05 20:09:11.072907 | orchestrator |
2025-06-05 20:09:11.072918 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:11.072928 | orchestrator | Thursday 05 June 2025 20:09:02 +0000 (0:00:00.072) 0:00:06.119 *********
2025-06-05 20:09:11.072939 | orchestrator |
2025-06-05 20:09:11.072950 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:11.072961 | orchestrator | Thursday 05 June 2025 20:09:02 +0000 (0:00:00.087) 0:00:06.207 *********
2025-06-05 20:09:11.072972 | orchestrator |
2025-06-05 20:09:11.072982 | orchestrator | TASK [Print report file information] *******************************************
2025-06-05 20:09:11.072993 | orchestrator | Thursday 05 June 2025 20:09:02 +0000 (0:00:00.075) 0:00:06.283 *********
2025-06-05 20:09:11.073004 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.073015 | orchestrator |
2025-06-05 20:09:11.073026 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-05 20:09:11.073036 | orchestrator | Thursday 05 June 2025 20:09:02 +0000 (0:00:00.258) 0:00:06.541 *********
2025-06-05 20:09:11.073047 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.073067 | orchestrator |
2025-06-05 20:09:11.073097 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-06-05 20:09:11.073109 | orchestrator | Thursday 05 June 2025 20:09:03 +0000 (0:00:00.233) 0:00:06.775 *********
2025-06-05 20:09:11.073120 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.073131 | orchestrator |
2025-06-05 20:09:11.073142 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-06-05 20:09:11.073152 | orchestrator | Thursday 05 June 2025 20:09:03 +0000 (0:00:00.119) 0:00:06.894 *********
2025-06-05 20:09:11.073163 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:09:11.073173 | orchestrator |
2025-06-05 20:09:11.073185 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-06-05 20:09:11.073195 | orchestrator | Thursday 05 June 2025 20:09:05 +0000 (0:00:02.009) 0:00:08.903 *********
2025-06-05 20:09:11.073206 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.073217 | orchestrator |
2025-06-05 20:09:11.073228 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-06-05 20:09:11.073238 | orchestrator | Thursday 05 June 2025 20:09:05 +0000 (0:00:00.242) 0:00:09.146 *********
2025-06-05 20:09:11.073249 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.073260 | orchestrator |
2025-06-05 20:09:11.073271 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-06-05 20:09:11.073281 | orchestrator | Thursday 05 June 2025 20:09:06 +0000 (0:00:00.722) 0:00:09.868 *********
2025-06-05 20:09:11.073292 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.073302 | orchestrator |
2025-06-05 20:09:11.073313 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-06-05 20:09:11.073324 | orchestrator | Thursday 05 June 2025 20:09:06 +0000 (0:00:00.136) 0:00:10.004 *********
2025-06-05 20:09:11.073335 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:09:11.073345 | orchestrator |
2025-06-05 20:09:11.073362 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-05 20:09:11.073373 | orchestrator | Thursday 05 June 2025 20:09:06 +0000 (0:00:00.150) 0:00:10.155 *********
2025-06-05 20:09:11.073383 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.073394 | orchestrator |
2025-06-05 20:09:11.073405 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-05 20:09:11.073416 | orchestrator | Thursday 05 June 2025 20:09:06 +0000 (0:00:00.262) 0:00:10.417 *********
2025-06-05 20:09:11.073427 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:09:11.073438 | orchestrator |
2025-06-05 20:09:11.073448 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-05 20:09:11.073459 | orchestrator | Thursday 05 June 2025 20:09:06 +0000 (0:00:00.226) 0:00:10.644 *********
2025-06-05 20:09:11.073470 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.073481 | orchestrator |
2025-06-05 20:09:11.073491 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-05 20:09:11.073502 | orchestrator | Thursday 05 June 2025 20:09:08 +0000 (0:00:01.222) 0:00:11.866 *********
2025-06-05 20:09:11.073512 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.073523 | orchestrator |
2025-06-05 20:09:11.073534 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-05 20:09:11.073545 | orchestrator | Thursday 05 June 2025 20:09:08 +0000 (0:00:00.246) 0:00:12.113 *********
2025-06-05 20:09:11.073555 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.073566 | orchestrator |
2025-06-05 20:09:11.073577 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:11.073588 | orchestrator | Thursday 05 June 2025 20:09:08 +0000 (0:00:00.232) 0:00:12.345 *********
2025-06-05 20:09:11.073599 | orchestrator |
2025-06-05 20:09:11.073610 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:11.073621 | orchestrator | Thursday 05 June 2025 20:09:08 +0000 (0:00:00.072) 0:00:12.418 *********
2025-06-05 20:09:11.073639 | orchestrator |
2025-06-05 20:09:11.073650 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:11.073661 | orchestrator | Thursday 05 June 2025 20:09:08 +0000 (0:00:00.077) 0:00:12.495 *********
2025-06-05 20:09:11.073671 | orchestrator |
2025-06-05 20:09:11.073682 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-05 20:09:11.073693 | orchestrator | Thursday 05 June 2025 20:09:08 +0000 (0:00:00.079) 0:00:12.575 *********
2025-06-05 20:09:11.073704 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:11.073714 | orchestrator |
2025-06-05 20:09:11.073725 | orchestrator | TASK [Print report file information] *******************************************
2025-06-05 20:09:11.073736 | orchestrator | Thursday 05 June 2025 20:09:10 +0000 (0:00:01.702) 0:00:14.277 *********
2025-06-05 20:09:11.073746 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-05 20:09:11.073757 | orchestrator |  "msg": [
2025-06-05 20:09:11.073769 | orchestrator |  "Validator run completed.",
2025-06-05 20:09:11.073780 | orchestrator |  "You can find the report file here:",
2025-06-05 20:09:11.073790 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-05T20:08:57+00:00-report.json",
2025-06-05 20:09:11.073802 | orchestrator |  "on the following host:",
2025-06-05 20:09:11.073813 | orchestrator |  "testbed-manager"
2025-06-05 20:09:11.073824 | orchestrator |  ]
2025-06-05 20:09:11.073835 | orchestrator | }
2025-06-05 20:09:11.073882 | orchestrator |
2025-06-05 20:09:11.073893 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:09:11.073905 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-05 20:09:11.073917 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 20:09:11.073936 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 20:09:11.400735 | orchestrator |
2025-06-05 20:09:11.400893 | orchestrator |
2025-06-05 20:09:11.400915 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:09:11.400929 | orchestrator | Thursday 05 June 2025 20:09:11 +0000 (0:00:00.437) 0:00:14.715 *********
2025-06-05 20:09:11.400941 | orchestrator | ===============================================================================
2025-06-05 20:09:11.400952 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.01s
2025-06-05 20:09:11.400963 | orchestrator | Write report file ------------------------------------------------------- 1.70s
2025-06-05 20:09:11.400974 | orchestrator | Aggregate test results step one ----------------------------------------- 1.22s
2025-06-05 20:09:11.400985 | orchestrator | Get container info ------------------------------------------------------ 0.96s
2025-06-05 20:09:11.400995 | orchestrator | Create report output directory ------------------------------------------ 0.80s
2025-06-05 20:09:11.401006 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.72s
2025-06-05 20:09:11.401016 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s
2025-06-05 20:09:11.401027 | orchestrator | Get timestamp for report file ------------------------------------------- 0.61s
2025-06-05 20:09:11.401037 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s
2025-06-05 20:09:11.401048 | orchestrator | Print report file information ------------------------------------------- 0.44s
2025-06-05 20:09:11.401059 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s
2025-06-05 20:09:11.401069 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s
2025-06-05 20:09:11.401080 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.28s
2025-06-05 20:09:11.401091 | orchestrator | Set test result to failed if container is missing ----------------------- 0.27s
2025-06-05 20:09:11.401127 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.26s
2025-06-05 20:09:11.401138 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s
2025-06-05 20:09:11.401149 | orchestrator | Print report file information ------------------------------------------- 0.26s
2025-06-05 20:09:11.401160 | orchestrator | Define report vars ------------------------------------------------------ 0.25s
2025-06-05 20:09:11.401171 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-06-05 20:09:11.401182 | orchestrator | Aggregate test results step two ----------------------------------------- 0.25s
2025-06-05 20:09:11.678185 | orchestrator | + osism validate ceph-osds
2025-06-05 20:09:13.390148 | orchestrator | Registering Redlock._acquired_script
2025-06-05 20:09:13.390251 | orchestrator | Registering Redlock._extend_script
2025-06-05 20:09:13.390265 | orchestrator | Registering Redlock._release_script
2025-06-05 20:09:22.555764 | orchestrator |
2025-06-05 20:09:22.555912 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-06-05 20:09:22.555927 | orchestrator |
2025-06-05 20:09:22.555937 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-05 20:09:22.555947 | orchestrator | Thursday 05 June 2025 20:09:18 +0000 (0:00:00.440) 0:00:00.440 *********
2025-06-05 20:09:22.555956 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:22.555966 | orchestrator |
2025-06-05 20:09:22.555975 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-05 20:09:22.555985 | orchestrator | Thursday 05 June 2025 20:09:18 +0000 (0:00:00.639) 0:00:01.079 *********
2025-06-05 20:09:22.555994 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:22.556002 | orchestrator |
2025-06-05 20:09:22.556012 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-05 20:09:22.556021 | orchestrator | Thursday 05 June 2025 20:09:19 +0000 (0:00:00.511) 0:00:01.591 *********
2025-06-05 20:09:22.556030 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:22.556039 | orchestrator |
2025-06-05 20:09:22.556048 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-05 20:09:22.556056 | orchestrator | Thursday 05 June 2025 20:09:20 +0000 (0:00:01.065) 0:00:02.657 *********
2025-06-05 20:09:22.556066 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:22.556075 | orchestrator |
2025-06-05 20:09:22.556084 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-05 20:09:22.556093 | orchestrator | Thursday 05 June 2025 20:09:20 +0000 (0:00:00.136) 0:00:02.793 *********
2025-06-05 20:09:22.556102 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:22.556112 | orchestrator |
2025-06-05 20:09:22.556121 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-05 20:09:22.556130 | orchestrator | Thursday 05 June 2025 20:09:20 +0000 (0:00:00.132) 0:00:02.926 *********
2025-06-05 20:09:22.556139 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:22.556148 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:09:22.556158 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:09:22.556166 | orchestrator |
2025-06-05 20:09:22.556175 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-06-05 20:09:22.556184 | orchestrator | Thursday 05 June 2025 20:09:20 +0000 (0:00:00.308) 0:00:03.234 *********
2025-06-05 20:09:22.556193 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:22.556202 | orchestrator |
2025-06-05 20:09:22.556212 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-06-05 20:09:22.556221 | orchestrator | Thursday 05 June 2025 20:09:20 +0000 (0:00:00.151) 0:00:03.386 *********
2025-06-05 20:09:22.556229 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:22.556239 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:22.556248 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:22.556257 | orchestrator |
2025-06-05 20:09:22.556266 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-06-05 20:09:22.556293 | orchestrator | Thursday 05 June 2025 20:09:21 +0000 (0:00:00.326) 0:00:03.713 *********
2025-06-05 20:09:22.556302 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:22.556311 | orchestrator |
2025-06-05 20:09:22.556322 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-05 20:09:22.556333 | orchestrator | Thursday 05 June 2025 20:09:21 +0000 (0:00:00.534) 0:00:04.247 *********
2025-06-05 20:09:22.556343 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:22.556353 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:22.556364 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:22.556375 | orchestrator |
2025-06-05 20:09:22.556385 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-06-05 20:09:22.556395 | orchestrator | Thursday 05 June 2025 20:09:22 +0000 (0:00:00.459) 0:00:04.706 *********
2025-06-05 20:09:22.556423 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c7866cd74d44641b649723d94000c0dd8029a57a382a5ca41e6a6ee541d7a2bd', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})
2025-06-05 20:09:22.556437 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ae8aa17a705da2497e637f75656de43b93255193ee6ee62874d0ffb5dd14cce4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state':
'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-05 20:09:22.556452 | orchestrator | skipping: [testbed-node-3] => (item={'id': '847496aeb4f4bf17041320d773127fe71e879a74d100a0e5ead0af5883fe2bae', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-05 20:09:22.556464 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b6ecbfce7d238c3a77f699c72a9f6937152e21e3fa3f4c3d9be03e5115c086fb', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-05 20:09:22.556475 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9647107bad28a48420bc5d7de57cd001983683fba74398af0207c26dd0773ebf', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-05 20:09:22.556501 | orchestrator | skipping: [testbed-node-3] => (item={'id': '62538ee097ea996992ea21caf2442970d77d3abb94c9d6d2b41012c7dad81b3a', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-05 20:09:22.556522 | orchestrator | skipping: [testbed-node-3] => (item={'id': '003e52100db572828e50dde689b84fa6a8e89869885fa3a65f1aa580d96b06b5', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-05 20:09:22.556533 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a1d31e6730c86399841a22f801be0d18e1d7bcaf8750bde2cbd1b1a6102f878a', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-05 
20:09:22.556545 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a6f398cb1a734de3e7c7a90d682c8e88426445bf18b0322c983eb5f7fedecd51', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-05 20:09:22.556555 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0fa44d82d74644b7acb5afc3ce9305645f22c3a2c1c5ca5b57596637683ffa0c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-05 20:09:22.556568 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6e50eeb3b2968e2b780532c6530c83211808fb3c113bc82bc77b64225dd65aae', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-05 20:09:22.556585 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ec1e481e5efe7564062cfdad55043a03f773517dba1880fd55b6abf3b9ede146', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-05 20:09:22.556597 | orchestrator | ok: [testbed-node-3] => (item={'id': '0c9a56620370b3e87d06fbfcd487dd80378c5651042fa3f0c7918d635f87ff3e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-05 20:09:22.556608 | orchestrator | ok: [testbed-node-3] => (item={'id': 'd2c5ddde7a55e08b938299945e49b0a02477a45af1e74120b6d7eb2eac54a1f6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-05 20:09:22.556619 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1b789f436bf8b8f249cd8b3b852ad9845a3588a1cc1f5b97ce668e48ad086f98', 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-06-05 20:09:22.556630 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd3aef7b7c0a5f6c83c3b706f4dfef37776641e672397e4b4a770ab585746c3b0', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-05 20:09:22.556641 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c15989a97ec6b16b1b7cdca87c97f90a9fcb67cbdeb488183bbaaf67513d3961', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-05 20:09:22.556656 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f39f6a16d8bd71bfde5751aea4e67c4d09e109e4b87a3426eee95377256deb85', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:22.556668 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd0b43e074dff8be85722ecc23509aa30b5c281548dd69c3c691e716f9639c960', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:22.556677 | orchestrator | skipping: [testbed-node-3] => (item={'id': '406c4988ffdf44920ea1b28b63b672c0ea5b68ebbddd5d1be27033ce6d56597e', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:22.556691 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7083b4d8b921cf45c9a89b249a2caf58477737e392c0e0ee028c91c073dc46d0', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  
2025-06-05 20:09:22.809832 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8ba604ede481fdd048886d76d707e44808b12c30c840a4890278ce7a891ea245', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-05 20:09:22.809985 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'cb2516c01cb5c68b3bd7d17385e8b5599e16a396b7e299c0472ebea0e72e8b34', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-05 20:09:22.810001 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e738fcb33a41389f09cb9090c72708b08cb312ad2e3e0ebacf35fef53038a087', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-05 20:09:22.810088 | orchestrator | skipping: [testbed-node-4] => (item={'id': '69b5ba6a92e325f19b8c63cf708fd50e615559e21e23fa96e8c92d274494bdd0', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-05 20:09:22.810102 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'aa7afda2472c8e1dae85b6a79e1b30bd26f0201a1bebe94bfea07c7697afdc10', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-05 20:09:22.810115 | orchestrator | skipping: [testbed-node-4] => (item={'id': '02b9049c5991671fef70d5c5ed49422e5ede55268967eb13332e8ad59e6f2fca', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-05 20:09:22.810127 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'de17cb1a771b102a91965e8bce8991802956f9c7a5be729dbdb9fcad43b46db5', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-05 20:09:22.810140 | orchestrator | skipping: [testbed-node-4] => (item={'id': '363a3c43903a2a38f3754a9e2f2678b5e752d35de1c591128123c259f31845a7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-05 20:09:22.810151 | orchestrator | skipping: [testbed-node-4] => (item={'id': '33c8d2753598e66b4d55997758075abdf93b72efec13fe83fa832da50923fd3b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-05 20:09:22.810163 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e44d89efef6569197da176e439bedc12019485767df95f6080858c0eafcd893d', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-05 20:09:22.810175 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8b26ca29adf278eae51ced3024e36fbeeb8c1d112392f41156455571606a7ab7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-05 20:09:22.810201 | orchestrator | ok: [testbed-node-4] => (item={'id': 'f0357701ad48992d610284aa6b46208212e7a0fdb58bf6737c913645fbf2f409', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-05 20:09:22.810213 | orchestrator | ok: [testbed-node-4] => (item={'id': '1e8287479d84c9a918dd9ed3405475c498d1ad47558ec0b66a34d26255e124f4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 
'running', 'status': 'Up 24 minutes'}) 2025-06-05 20:09:22.810225 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f1d6128a6894bfb5d2ae9ddadcb3abd8b949c39c44481f11f300b980a0159550', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-06-05 20:09:22.810253 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6a6f3d0f748fe3efa1519dfb17337a550e1b4525e3c1b5d9555733384e846442', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-05 20:09:22.810265 | orchestrator | skipping: [testbed-node-4] => (item={'id': '08d845fa2401443e1179895f0baf9f96a54cfd719501bae0899d3a72beeb2956', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-05 20:09:22.810276 | orchestrator | skipping: [testbed-node-4] => (item={'id': '312380cca4c795c84f79e03866dc1a0ed58a905bfd442815ea991ee8c83d3bf7', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:22.810295 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a2fb22d2cfbf9f50a7588455aed002995938be9fa936f949fb0d444bfba1f92e', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:22.810306 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f040d11dbde003a24071e64b42bc45efcc1e5f12c6a0b042b4b550844e49a401', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:22.810317 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'be230945c4bde67c183184e883a6ad0e2e457e1eeef5a0f09fa773f68c546a6e', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-05 20:09:22.810329 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7aa17d44f39f3f06eccdbe8b905edf05c150db0a2cd538c9a0207ed3409e143f', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-05 20:09:22.810340 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ea62034f24f123a7ac68a8781b42fe29ad0377928aa4c3f1e6f36c2568c6d969', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-05 20:09:22.810351 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bb5701ac75933e0d88516c3d6e018550a648177d80a41f737a2cc1aa868d9418', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-05 20:09:22.810362 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd59588ee42de3b6a31d5a48dfff94bb9f0fe519a56c2fb19975c173a2c6e8b60', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})  2025-06-05 20:09:22.810373 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b0582aa05e8149f9c2f3c293c4071fa08e3e62ee7505075a02af9f0d31638a8c', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-05 20:09:22.810386 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dc956b8a6d7e0e9aa953ddac7f7b231e89b76a37b2ce64abe68953e05326e5ca', 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-05 20:09:22.810407 | orchestrator | skipping: [testbed-node-5] => (item={'id': '86696ed88ee7b7dcf225d976bfb3442de5fa7265d0a8f0d7557b724d650a719f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-05 20:09:22.810420 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c80d95dd55d7246091fb1fad0e9aaef567aa4589ee023c173fcad2d339bb5c21', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-05 20:09:22.810434 | orchestrator | skipping: [testbed-node-5] => (item={'id': '997332b0dc76179f1804a4ea6c023123e911e608da2f0711c4325b1581bbf0f4', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-05 20:09:22.810454 | orchestrator | skipping: [testbed-node-5] => (item={'id': '631dab1e31cd2e23df0585f78602520e1f72b7b2d060a04d42e8a215575046f9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-05 20:09:30.612512 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a2a01fc2ea8c149f208d11baed6f0ed74cf83b8f01d98cfbd53f02a6f5cec0c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-05 20:09:30.612624 | orchestrator | ok: [testbed-node-5] => (item={'id': 'e0c65ccd3296725d1046aa7270d3cc6245d963caa4d9501eb6df5fcd93a9c6f2', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 
minutes'}) 2025-06-05 20:09:30.612640 | orchestrator | ok: [testbed-node-5] => (item={'id': 'aa674483a1b21cfa4799893cfa80ff971fc3ec15d6ff191d4330cd87ff464214', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-05 20:09:30.612653 | orchestrator | skipping: [testbed-node-5] => (item={'id': '41d96d0bf9c5b5c117c2aa0d3668259a3a6d9f32748fadbe8f8c7174f07a5902', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 27 minutes'})  2025-06-05 20:09:30.612667 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fd8e197de619f1a5baa3e1397516f2e653761431226d9553f7ac0f4218fc2df0', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-05 20:09:30.612680 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ebbb9d5990413f9308c8bbffe03286a4d37e6a2a3aeacf56cc9bb2f61f38ede', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-05 20:09:30.612691 | orchestrator | skipping: [testbed-node-5] => (item={'id': '67831d4fb00aa2e9d995b6c383d358adfca70878f0c06c741f87d40805db9b4c', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:30.612703 | orchestrator | skipping: [testbed-node-5] => (item={'id': '06bb64cd15d48cef33e91e5ab5a67ffa59d403c617c487fee14b413815d496b5', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:30.612715 | orchestrator | skipping: [testbed-node-5] => (item={'id': '432d9b4e873ea6f0d333165bdf751983b0844710af30239358d083bdf4a416ff', 
'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-05 20:09:30.612727 | orchestrator | 2025-06-05 20:09:30.612741 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-05 20:09:30.612753 | orchestrator | Thursday 05 June 2025 20:09:22 +0000 (0:00:00.500) 0:00:05.206 ********* 2025-06-05 20:09:30.612764 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:09:30.612776 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:09:30.612787 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:09:30.612798 | orchestrator | 2025-06-05 20:09:30.612809 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-05 20:09:30.612821 | orchestrator | Thursday 05 June 2025 20:09:23 +0000 (0:00:00.285) 0:00:05.492 ********* 2025-06-05 20:09:30.612832 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.612918 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:09:30.612940 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:09:30.612953 | orchestrator | 2025-06-05 20:09:30.612981 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-05 20:09:30.612993 | orchestrator | Thursday 05 June 2025 20:09:23 +0000 (0:00:00.499) 0:00:05.992 ********* 2025-06-05 20:09:30.613004 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:09:30.613015 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:09:30.613026 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:09:30.613056 | orchestrator | 2025-06-05 20:09:30.613070 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-05 20:09:30.613082 | orchestrator | Thursday 05 June 2025 20:09:23 +0000 (0:00:00.286) 0:00:06.279 ********* 2025-06-05 20:09:30.613095 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:09:30.613108 | orchestrator | ok: 
[testbed-node-4] 2025-06-05 20:09:30.613120 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:09:30.613133 | orchestrator | 2025-06-05 20:09:30.613146 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-05 20:09:30.613159 | orchestrator | Thursday 05 June 2025 20:09:24 +0000 (0:00:00.284) 0:00:06.563 ********* 2025-06-05 20:09:30.613173 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-05 20:09:30.613188 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-05 20:09:30.613200 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613213 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-05 20:09:30.613226 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-05 20:09:30.613256 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:09:30.613270 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-05 20:09:30.613283 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-05 20:09:30.613296 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:09:30.613308 | orchestrator | 2025-06-05 20:09:30.613321 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-05 20:09:30.613334 | orchestrator | Thursday 05 June 2025 20:09:24 +0000 (0:00:00.305) 0:00:06.869 ********* 2025-06-05 20:09:30.613347 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:09:30.613359 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:09:30.613372 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:09:30.613385 | orchestrator | 2025-06-05 20:09:30.613397 | orchestrator | TASK [Set 
test result to failed if an OSD is not running] ********************** 2025-06-05 20:09:30.613408 | orchestrator | Thursday 05 June 2025 20:09:24 +0000 (0:00:00.458) 0:00:07.328 ********* 2025-06-05 20:09:30.613419 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613430 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:09:30.613441 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:09:30.613452 | orchestrator | 2025-06-05 20:09:30.613462 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-05 20:09:30.613473 | orchestrator | Thursday 05 June 2025 20:09:25 +0000 (0:00:00.277) 0:00:07.605 ********* 2025-06-05 20:09:30.613484 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613495 | orchestrator | skipping: [testbed-node-4] 2025-06-05 20:09:30.613506 | orchestrator | skipping: [testbed-node-5] 2025-06-05 20:09:30.613516 | orchestrator | 2025-06-05 20:09:30.613527 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-05 20:09:30.613538 | orchestrator | Thursday 05 June 2025 20:09:25 +0000 (0:00:00.261) 0:00:07.866 ********* 2025-06-05 20:09:30.613549 | orchestrator | ok: [testbed-node-3] 2025-06-05 20:09:30.613560 | orchestrator | ok: [testbed-node-4] 2025-06-05 20:09:30.613570 | orchestrator | ok: [testbed-node-5] 2025-06-05 20:09:30.613581 | orchestrator | 2025-06-05 20:09:30.613592 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-05 20:09:30.613603 | orchestrator | Thursday 05 June 2025 20:09:25 +0000 (0:00:00.310) 0:00:08.177 ********* 2025-06-05 20:09:30.613613 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613624 | orchestrator | 2025-06-05 20:09:30.613635 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-05 20:09:30.613646 | orchestrator | Thursday 05 June 2025 20:09:26 +0000 
(0:00:00.623) 0:00:08.800 ********* 2025-06-05 20:09:30.613657 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613675 | orchestrator | 2025-06-05 20:09:30.613686 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-05 20:09:30.613697 | orchestrator | Thursday 05 June 2025 20:09:26 +0000 (0:00:00.244) 0:00:09.045 ********* 2025-06-05 20:09:30.613707 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613718 | orchestrator | 2025-06-05 20:09:30.613729 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-05 20:09:30.613740 | orchestrator | Thursday 05 June 2025 20:09:26 +0000 (0:00:00.240) 0:00:09.286 ********* 2025-06-05 20:09:30.613750 | orchestrator | 2025-06-05 20:09:30.613761 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-05 20:09:30.613772 | orchestrator | Thursday 05 June 2025 20:09:26 +0000 (0:00:00.065) 0:00:09.352 ********* 2025-06-05 20:09:30.613783 | orchestrator | 2025-06-05 20:09:30.613793 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-05 20:09:30.613804 | orchestrator | Thursday 05 June 2025 20:09:26 +0000 (0:00:00.064) 0:00:09.417 ********* 2025-06-05 20:09:30.613815 | orchestrator | 2025-06-05 20:09:30.613826 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-05 20:09:30.613836 | orchestrator | Thursday 05 June 2025 20:09:27 +0000 (0:00:00.069) 0:00:09.486 ********* 2025-06-05 20:09:30.613870 | orchestrator | skipping: [testbed-node-3] 2025-06-05 20:09:30.613881 | orchestrator | 2025-06-05 20:09:30.613892 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-05 20:09:30.613903 | orchestrator | Thursday 05 June 2025 20:09:27 +0000 (0:00:00.223) 0:00:09.710 ********* 2025-06-05 20:09:30.613914 | 
orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:30.613924 | orchestrator |
2025-06-05 20:09:30.613935 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-05 20:09:30.613946 | orchestrator | Thursday 05 June 2025 20:09:27 +0000 (0:00:00.231) 0:00:09.941 *********
2025-06-05 20:09:30.613963 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:30.613974 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:30.613985 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:30.613995 | orchestrator |
2025-06-05 20:09:30.614006 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-06-05 20:09:30.614073 | orchestrator | Thursday 05 June 2025 20:09:27 +0000 (0:00:00.283) 0:00:10.225 *********
2025-06-05 20:09:30.614087 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:30.614102 | orchestrator |
2025-06-05 20:09:30.614121 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-06-05 20:09:30.614141 | orchestrator | Thursday 05 June 2025 20:09:28 +0000 (0:00:00.605) 0:00:10.831 *********
2025-06-05 20:09:30.614168 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-05 20:09:30.614226 | orchestrator |
2025-06-05 20:09:30.614245 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-06-05 20:09:30.614264 | orchestrator | Thursday 05 June 2025 20:09:30 +0000 (0:00:01.641) 0:00:12.472 *********
2025-06-05 20:09:30.614283 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:30.614302 | orchestrator |
2025-06-05 20:09:30.614320 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-06-05 20:09:30.614339 | orchestrator | Thursday 05 June 2025 20:09:30 +0000 (0:00:00.137) 0:00:12.609 *********
2025-06-05 20:09:30.614357 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:30.614375 | orchestrator |
2025-06-05 20:09:30.614393 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-06-05 20:09:30.614412 | orchestrator | Thursday 05 June 2025 20:09:30 +0000 (0:00:00.289) 0:00:12.899 *********
2025-06-05 20:09:30.614443 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:42.593577 | orchestrator |
2025-06-05 20:09:42.593688 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-06-05 20:09:42.593704 | orchestrator | Thursday 05 June 2025 20:09:30 +0000 (0:00:00.121) 0:00:13.020 *********
2025-06-05 20:09:42.593715 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.593726 | orchestrator |
2025-06-05 20:09:42.593759 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-05 20:09:42.593770 | orchestrator | Thursday 05 June 2025 20:09:30 +0000 (0:00:00.108) 0:00:13.128 *********
2025-06-05 20:09:42.593780 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.593790 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.593799 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.593809 | orchestrator |
2025-06-05 20:09:42.593818 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-06-05 20:09:42.593828 | orchestrator | Thursday 05 June 2025 20:09:30 +0000 (0:00:00.275) 0:00:13.404 *********
2025-06-05 20:09:42.593838 | orchestrator | changed: [testbed-node-3]
2025-06-05 20:09:42.593896 | orchestrator | changed: [testbed-node-4]
2025-06-05 20:09:42.593906 | orchestrator | changed: [testbed-node-5]
2025-06-05 20:09:42.593916 | orchestrator |
2025-06-05 20:09:42.593925 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-06-05 20:09:42.593936 | orchestrator | Thursday 05 June 2025 20:09:33 +0000 (0:00:02.528) 0:00:15.932 *********
2025-06-05 20:09:42.593945 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.593955 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.593965 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.593988 | orchestrator |
2025-06-05 20:09:42.594008 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-06-05 20:09:42.594074 | orchestrator | Thursday 05 June 2025 20:09:33 +0000 (0:00:00.291) 0:00:16.223 *********
2025-06-05 20:09:42.594087 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.594097 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.594106 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.594147 | orchestrator |
2025-06-05 20:09:42.594160 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-06-05 20:09:42.594172 | orchestrator | Thursday 05 June 2025 20:09:34 +0000 (0:00:00.467) 0:00:16.690 *********
2025-06-05 20:09:42.594184 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:42.594195 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:09:42.594206 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:09:42.594218 | orchestrator |
2025-06-05 20:09:42.594229 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-06-05 20:09:42.594239 | orchestrator | Thursday 05 June 2025 20:09:34 +0000 (0:00:00.282) 0:00:16.973 *********
2025-06-05 20:09:42.594249 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.594258 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.594268 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.594277 | orchestrator |
2025-06-05 20:09:42.594287 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-06-05 20:09:42.594297 | orchestrator | Thursday 05 June 2025 20:09:35 +0000 (0:00:00.488) 0:00:17.462 *********
2025-06-05 20:09:42.594306 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:42.594316 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:09:42.594325 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:09:42.594335 | orchestrator |
2025-06-05 20:09:42.594345 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-06-05 20:09:42.594354 | orchestrator | Thursday 05 June 2025 20:09:35 +0000 (0:00:00.274) 0:00:17.736 *********
2025-06-05 20:09:42.594364 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:42.594374 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:09:42.594383 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:09:42.594393 | orchestrator |
2025-06-05 20:09:42.594403 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-05 20:09:42.594413 | orchestrator | Thursday 05 June 2025 20:09:35 +0000 (0:00:00.320) 0:00:18.056 *********
2025-06-05 20:09:42.594422 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.594432 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.594441 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.594451 | orchestrator |
2025-06-05 20:09:42.594461 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-06-05 20:09:42.594479 | orchestrator | Thursday 05 June 2025 20:09:36 +0000 (0:00:00.483) 0:00:18.540 *********
2025-06-05 20:09:42.594489 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.594498 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.594508 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.594517 | orchestrator |
2025-06-05 20:09:42.594527 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-06-05 20:09:42.594537 | orchestrator | Thursday 05 June 2025 20:09:36 +0000 (0:00:00.675) 0:00:19.215 *********
2025-06-05 20:09:42.594547 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.594556 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.594566 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.594575 | orchestrator |
2025-06-05 20:09:42.594585 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-06-05 20:09:42.594595 | orchestrator | Thursday 05 June 2025 20:09:37 +0000 (0:00:00.289) 0:00:19.504 *********
2025-06-05 20:09:42.594604 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:42.594614 | orchestrator | skipping: [testbed-node-4]
2025-06-05 20:09:42.594623 | orchestrator | skipping: [testbed-node-5]
2025-06-05 20:09:42.594633 | orchestrator |
2025-06-05 20:09:42.594643 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-06-05 20:09:42.594652 | orchestrator | Thursday 05 June 2025 20:09:37 +0000 (0:00:00.272) 0:00:19.777 *********
2025-06-05 20:09:42.594662 | orchestrator | ok: [testbed-node-3]
2025-06-05 20:09:42.594671 | orchestrator | ok: [testbed-node-4]
2025-06-05 20:09:42.594681 | orchestrator | ok: [testbed-node-5]
2025-06-05 20:09:42.594690 | orchestrator |
2025-06-05 20:09:42.594700 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-05 20:09:42.594710 | orchestrator | Thursday 05 June 2025 20:09:37 +0000 (0:00:00.321) 0:00:20.098 *********
2025-06-05 20:09:42.594719 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:42.594729 | orchestrator |
2025-06-05 20:09:42.594739 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-05 20:09:42.594749 | orchestrator | Thursday 05 June 2025 20:09:38 +0000 (0:00:00.657) 0:00:20.756 *********
2025-06-05 20:09:42.594759 | orchestrator | skipping: [testbed-node-3]
2025-06-05 20:09:42.594769 | orchestrator |
2025-06-05 20:09:42.594795 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-05 20:09:42.594806 | orchestrator | Thursday 05 June 2025 20:09:38 +0000 (0:00:00.250) 0:00:21.006 *********
2025-06-05 20:09:42.594816 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:42.594826 | orchestrator |
2025-06-05 20:09:42.594836 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-05 20:09:42.594865 | orchestrator | Thursday 05 June 2025 20:09:40 +0000 (0:00:01.492) 0:00:22.499 *********
2025-06-05 20:09:42.594875 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:42.594885 | orchestrator |
2025-06-05 20:09:42.594895 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-05 20:09:42.594904 | orchestrator | Thursday 05 June 2025 20:09:40 +0000 (0:00:00.260) 0:00:22.760 *********
2025-06-05 20:09:42.594914 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:42.594924 | orchestrator |
2025-06-05 20:09:42.594933 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:42.594943 | orchestrator | Thursday 05 June 2025 20:09:40 +0000 (0:00:00.238) 0:00:22.998 *********
2025-06-05 20:09:42.594953 | orchestrator |
2025-06-05 20:09:42.594962 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:42.594972 | orchestrator | Thursday 05 June 2025 20:09:40 +0000 (0:00:00.065) 0:00:23.064 *********
2025-06-05 20:09:42.594982 | orchestrator |
2025-06-05 20:09:42.594991 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-05 20:09:42.595001 | orchestrator | Thursday 05 June 2025 20:09:40 +0000 (0:00:00.064) 0:00:23.129 *********
2025-06-05 20:09:42.595010 | orchestrator |
2025-06-05 20:09:42.595020 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-05 20:09:42.595036 | orchestrator | Thursday 05 June 2025 20:09:40 +0000 (0:00:00.068) 0:00:23.197 *********
2025-06-05 20:09:42.595089 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-05 20:09:42.595100 | orchestrator |
2025-06-05 20:09:42.595110 | orchestrator | TASK [Print report file information] *******************************************
2025-06-05 20:09:42.595120 | orchestrator | Thursday 05 June 2025 20:09:42 +0000 (0:00:01.228) 0:00:24.426 *********
2025-06-05 20:09:42.595129 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-06-05 20:09:42.595139 | orchestrator |  "msg": [
2025-06-05 20:09:42.595149 | orchestrator |  "Validator run completed.",
2025-06-05 20:09:42.595159 | orchestrator |  "You can find the report file here:",
2025-06-05 20:09:42.595169 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-05T20:09:18+00:00-report.json",
2025-06-05 20:09:42.595180 | orchestrator |  "on the following host:",
2025-06-05 20:09:42.595189 | orchestrator |  "testbed-manager"
2025-06-05 20:09:42.595199 | orchestrator |  ]
2025-06-05 20:09:42.595209 | orchestrator | }
2025-06-05 20:09:42.595219 | orchestrator |
2025-06-05 20:09:42.595229 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:09:42.595240 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-06-05 20:09:42.595252 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-05 20:09:42.595261 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-05 20:09:42.595271 | orchestrator |
2025-06-05 20:09:42.595281 | orchestrator |
2025-06-05 20:09:42.595290 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:09:42.595300 | orchestrator | Thursday 05 June 2025 20:09:42 +0000 (0:00:00.554) 0:00:24.980 *********
2025-06-05 20:09:42.595310 | orchestrator | ===============================================================================
2025-06-05 20:09:42.595319 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.53s
2025-06-05 20:09:42.595329 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.64s
2025-06-05 20:09:42.595343 | orchestrator | Aggregate test results step one ----------------------------------------- 1.49s
2025-06-05 20:09:42.595353 | orchestrator | Write report file ------------------------------------------------------- 1.23s
2025-06-05 20:09:42.595362 | orchestrator | Create report output directory ------------------------------------------ 1.07s
2025-06-05 20:09:42.595372 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.68s
2025-06-05 20:09:42.595381 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.66s
2025-06-05 20:09:42.595391 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s
2025-06-05 20:09:42.595400 | orchestrator | Aggregate test results step one ----------------------------------------- 0.62s
2025-06-05 20:09:42.595410 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.61s
2025-06-05 20:09:42.595420 | orchestrator | Print report file information ------------------------------------------- 0.55s
2025-06-05 20:09:42.595429 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.53s
2025-06-05 20:09:42.595439 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.51s
2025-06-05 20:09:42.595448 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s
2025-06-05 20:09:42.595458 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.50s
2025-06-05 20:09:42.595467 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.49s
2025-06-05 20:09:42.595484 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s
2025-06-05 20:09:42.879794 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s
2025-06-05 20:09:42.879973 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s
2025-06-05 20:09:42.879999 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.46s
2025-06-05 20:09:43.119560 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-06-05 20:09:43.126675 | orchestrator | + set -e
2025-06-05 20:09:43.126726 | orchestrator | + source /opt/manager-vars.sh
2025-06-05 20:09:43.126741 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-05 20:09:43.126753 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-05 20:09:43.126764 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-05 20:09:43.126775 | orchestrator | ++ CEPH_VERSION=reef
2025-06-05 20:09:43.126786 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-05 20:09:43.126799 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-05 20:09:43.126810 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-05 20:09:43.126821 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-05 20:09:43.126833 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-05 20:09:43.126874 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-05 20:09:43.126888 | orchestrator | ++ export ARA=false
2025-06-05 20:09:43.126899 | orchestrator | ++ ARA=false
2025-06-05 20:09:43.126910 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-05 20:09:43.126921 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-05 20:09:43.126932 | orchestrator | ++ export TEMPEST=false
2025-06-05 20:09:43.126943 | orchestrator | ++ TEMPEST=false
2025-06-05 20:09:43.126954 | orchestrator | ++ export IS_ZUUL=true
2025-06-05 20:09:43.126964 | orchestrator | ++ IS_ZUUL=true
2025-06-05 20:09:43.126975 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-06-05 20:09:43.126986 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.172
2025-06-05 20:09:43.126997 | orchestrator | ++ export EXTERNAL_API=false
2025-06-05 20:09:43.127008 | orchestrator | ++ EXTERNAL_API=false
2025-06-05 20:09:43.127019 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-05 20:09:43.127029 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-05 20:09:43.127040 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-05 20:09:43.127051 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-05 20:09:43.127062 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-05 20:09:43.127073 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-05 20:09:43.127084 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-05 20:09:43.127095 | orchestrator | + source /etc/os-release
2025-06-05 20:09:43.127106 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-06-05 20:09:43.127117 | orchestrator | ++ NAME=Ubuntu
2025-06-05 20:09:43.127128 | orchestrator | ++ VERSION_ID=24.04
2025-06-05 20:09:43.127138 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-06-05 20:09:43.127149 | orchestrator | ++ VERSION_CODENAME=noble
2025-06-05 20:09:43.127160 | orchestrator | ++ ID=ubuntu
2025-06-05 20:09:43.127171 | orchestrator | ++ ID_LIKE=debian
2025-06-05 20:09:43.127182 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-06-05 20:09:43.127193 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-06-05 20:09:43.127204 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-06-05 20:09:43.127215 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-06-05 20:09:43.127227 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-06-05 20:09:43.127238 | orchestrator | ++ LOGO=ubuntu-logo
2025-06-05 20:09:43.127249 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-06-05 20:09:43.127260 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-06-05 20:09:43.127273 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-06-05 20:09:43.155149 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-06-05 20:10:05.451620 | orchestrator |
2025-06-05 20:10:05.451720 | orchestrator | # Status of Elasticsearch
2025-06-05 20:10:05.451736 | orchestrator |
2025-06-05 20:10:05.451748 | orchestrator | + pushd /opt/configuration/contrib
2025-06-05 20:10:05.451759 | orchestrator | + echo
2025-06-05 20:10:05.451769 | orchestrator | + echo '# Status of Elasticsearch'
2025-06-05 20:10:05.451779 | orchestrator | + echo
2025-06-05 20:10:05.451790 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-06-05 20:10:05.656371 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-06-05 20:10:05.657302 | orchestrator |
2025-06-05 20:10:05.657338 | orchestrator | # Status of MariaDB
2025-06-05 20:10:05.657352 | orchestrator |
2025-06-05 20:10:05.657364 | orchestrator | + echo
2025-06-05 20:10:05.657376 | orchestrator | + echo '# Status of MariaDB'
2025-06-05 20:10:05.657388 | orchestrator | + echo
2025-06-05 20:10:05.657399 | orchestrator | + MARIADB_USER=root_shard_0
2025-06-05 20:10:05.657411 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-06-05 20:10:05.731983 | orchestrator | Reading package lists...
2025-06-05 20:10:06.032791 | orchestrator | Building dependency tree...
2025-06-05 20:10:06.034535 | orchestrator | Reading state information...
2025-06-05 20:10:06.388199 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4).
2025-06-05 20:10:06.388303 | orchestrator | bc set to manually installed.
2025-06-05 20:10:06.388319 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
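The validator play above fetches `ceph osd tree`, parses it as JSON, and derives the set of OSDs that are not up or in. A minimal sketch of that check, assuming the JSON shape produced by `ceph osd tree -f json` (a `nodes` list whose OSD entries carry `status` and `reweight`); the exact fields and logic the OSISM validator uses are an assumption here, not its actual implementation.

```python
import json

def problem_osds(tree: dict) -> list:
    """Return names of OSDs that are not up (status != 'up') or
    not in (reweight of 0, i.e. marked out of the CRUSH map)."""
    return [
        node["name"]
        for node in tree.get("nodes", [])
        if node.get("type") == "osd"
        and (node.get("status") != "up" or node.get("reweight", 1.0) == 0)
    ]

# Hypothetical sample in the shape of `ceph osd tree -f json`.
sample = json.loads("""
{"nodes": [
  {"id": -1, "name": "default", "type": "root"},
  {"id": 0, "name": "osd.0", "type": "osd", "status": "up",   "reweight": 1.0},
  {"id": 1, "name": "osd.1", "type": "osd", "status": "down", "reweight": 1.0},
  {"id": 2, "name": "osd.2", "type": "osd", "status": "up",   "reweight": 0.0}
]}
""")
print(problem_osds(sample))  # -> ['osd.1', 'osd.2']
```

In the run above this list is empty, so the "Fail test if OSDs are not up or in" task is skipped and the "Pass" task reports ok.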
2025-06-05 20:10:07.011499 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-06-05 20:10:07.011602 | orchestrator |
2025-06-05 20:10:07.011618 | orchestrator | # Status of Prometheus
2025-06-05 20:10:07.011630 | orchestrator | + echo
2025-06-05 20:10:07.011642 | orchestrator | + echo '# Status of Prometheus'
2025-06-05 20:10:07.011653 | orchestrator | + echo
2025-06-05 20:10:07.012604 | orchestrator |
2025-06-05 20:10:07.012629 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-06-05 20:10:07.074804 | orchestrator | Unauthorized
2025-06-05 20:10:07.078214 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-06-05 20:10:07.138806 | orchestrator | Unauthorized
2025-06-05 20:10:07.142911 | orchestrator |
2025-06-05 20:10:07.142947 | orchestrator | # Status of RabbitMQ
2025-06-05 20:10:07.142961 | orchestrator |
2025-06-05 20:10:07.142973 | orchestrator | + echo
2025-06-05 20:10:07.142985 | orchestrator | + echo '# Status of RabbitMQ'
2025-06-05 20:10:07.142997 | orchestrator | + echo
2025-06-05 20:10:07.143010 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-06-05 20:10:07.601094 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-06-05 20:10:07.610814 | orchestrator |
2025-06-05 20:10:07.610890 | orchestrator | # Status of Redis
2025-06-05 20:10:07.610906 | orchestrator |
2025-06-05 20:10:07.610918 | orchestrator | + echo
2025-06-05 20:10:07.610930 | orchestrator | + echo '# Status of Redis'
2025-06-05 20:10:07.610943 | orchestrator | + echo
2025-06-05 20:10:07.610956 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-06-05 20:10:07.617048 | orchestrator | TCP OK - 0.003 second response time on 192.168.16.10 port 6379|time=0.002838s;;;0.000000;10.000000
2025-06-05 20:10:07.617926 | orchestrator |
2025-06-05 20:10:07.617964 | orchestrator | # Create backup of MariaDB database
2025-06-05 20:10:07.617979 | orchestrator |
2025-06-05 20:10:07.617992 | orchestrator | + popd
2025-06-05 20:10:07.618004 | orchestrator | + echo
2025-06-05 20:10:07.618061 | orchestrator | + echo '# Create backup of MariaDB database'
2025-06-05 20:10:07.618074 | orchestrator | + echo
2025-06-05 20:10:07.618085 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-06-05 20:10:09.374679 | orchestrator | 2025-06-05 20:10:09 | INFO  | Task 48cef0ab-dae9-45b2-9a41-9160da367d07 (mariadb_backup) was prepared for execution.
2025-06-05 20:10:09.375573 | orchestrator | 2025-06-05 20:10:09 | INFO  | It takes a moment until task 48cef0ab-dae9-45b2-9a41-9160da367d07 (mariadb_backup) has been started and output is visible here.
2025-06-05 20:10:13.444288 | orchestrator |
2025-06-05 20:10:13.445053 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-05 20:10:13.446579 | orchestrator |
2025-06-05 20:10:13.449009 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-05 20:10:13.449957 | orchestrator | Thursday 05 June 2025 20:10:13 +0000 (0:00:00.181) 0:00:00.181 *********
2025-06-05 20:10:13.634992 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:10:13.771944 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:10:13.772300 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:10:13.773026 | orchestrator |
2025-06-05 20:10:13.774774 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-05 20:10:13.776123 | orchestrator | Thursday 05 June 2025 20:10:13 +0000 (0:00:00.332) 0:00:00.514 *********
2025-06-05 20:10:14.424181 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-06-05 20:10:14.425090 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-06-05 20:10:14.425134 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-06-05 20:10:14.425153 | orchestrator |
2025-06-05 20:10:14.425173 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-06-05 20:10:14.425192 | orchestrator |
2025-06-05 20:10:14.425224 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-06-05 20:10:14.425242 | orchestrator | Thursday 05 June 2025 20:10:14 +0000 (0:00:00.650) 0:00:01.164 *********
2025-06-05 20:10:14.803225 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-05 20:10:14.804132 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-05 20:10:14.805514 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-05 20:10:14.806376 | orchestrator |
2025-06-05 20:10:14.808771 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-05 20:10:14.808809 | orchestrator | Thursday 05 June 2025 20:10:14 +0000 (0:00:00.379) 0:00:01.544 *********
2025-06-05 20:10:15.318101 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-05 20:10:15.318742 | orchestrator |
2025-06-05 20:10:15.320186 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-06-05 20:10:15.322536 | orchestrator | Thursday 05 June 2025 20:10:15 +0000 (0:00:00.515) 0:00:02.060 *********
2025-06-05 20:10:18.462823 | orchestrator | ok: [testbed-node-2]
2025-06-05 20:10:18.463604 | orchestrator | ok: [testbed-node-1]
2025-06-05 20:10:18.465491 | orchestrator | ok: [testbed-node-0]
2025-06-05 20:10:18.466703 | orchestrator |
2025-06-05 20:10:18.467548 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-06-05 20:10:18.468191 | orchestrator | Thursday 05 June 2025 20:10:18 +0000 (0:00:03.141) 0:00:05.201 *********
2025-06-05 20:11:16.356867 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-05 20:11:16.357026 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-06-05 20:11:16.357042 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-06-05 20:11:16.357056 | orchestrator | mariadb_bootstrap_restart
2025-06-05 20:11:16.425049 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:11:16.425697 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:11:16.427771 | orchestrator | changed: [testbed-node-0]
2025-06-05 20:11:16.428025 | orchestrator |
2025-06-05 20:11:16.428962 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-05 20:11:16.429766 | orchestrator | skipping: no hosts matched
2025-06-05 20:11:16.430675 | orchestrator |
2025-06-05 20:11:16.431581 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-05 20:11:16.432043 | orchestrator | skipping: no hosts matched
2025-06-05 20:11:16.432939 | orchestrator |
2025-06-05 20:11:16.434269 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-05 20:11:16.434350 | orchestrator | skipping: no hosts matched
2025-06-05 20:11:16.435008 | orchestrator |
2025-06-05 20:11:16.435724 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-05 20:11:16.436417 | orchestrator |
2025-06-05 20:11:16.438688 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-05 20:11:16.438729 | orchestrator | Thursday 05 June 2025 20:11:16 +0000 (0:00:57.966) 0:01:03.167 *********
2025-06-05 20:11:16.603967 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:11:16.713375 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:11:16.714226 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:11:16.717799 | orchestrator |
2025-06-05 20:11:16.717825 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-05 20:11:16.717838 | orchestrator | Thursday 05 June 2025 20:11:16 +0000 (0:00:00.288) 0:01:03.456 *********
2025-06-05 20:11:17.055473 | orchestrator | skipping: [testbed-node-0]
2025-06-05 20:11:17.097743 | orchestrator | skipping: [testbed-node-1]
2025-06-05 20:11:17.098625 | orchestrator | skipping: [testbed-node-2]
2025-06-05 20:11:17.099510 | orchestrator |
2025-06-05 20:11:17.100279 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:11:17.101165 | orchestrator | 2025-06-05 20:11:17 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 20:11:17.101786 | orchestrator | 2025-06-05 20:11:17 | INFO  | Please wait and do not abort execution.
2025-06-05 20:11:17.102818 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-05 20:11:17.103225 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-05 20:11:17.103781 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-05 20:11:17.104249 | orchestrator |
2025-06-05 20:11:17.104725 | orchestrator |
2025-06-05 20:11:17.105429 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:11:17.106287 | orchestrator | Thursday 05 June 2025 20:11:17 +0000 (0:00:00.385) 0:01:03.841 *********
2025-06-05 20:11:17.106747 | orchestrator | ===============================================================================
2025-06-05 20:11:17.107101 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 57.97s
2025-06-05 20:11:17.107440 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.14s
2025-06-05 20:11:17.107738 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.65s
2025-06-05 20:11:17.108121 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.52s
2025-06-05 20:11:17.108670 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s
2025-06-05 20:11:17.109381 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.38s
2025-06-05 20:11:17.109803 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-06-05 20:11:17.110378 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s
2025-06-05 20:11:17.590227 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh
2025-06-05 20:11:17.596251 | orchestrator | + set -e
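The PLAY RECAP lines above (e.g. `testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 ...`) follow standard Ansible output and can be turned into counters when post-processing a job log. A minimal sketch; the field list parsed here is a subset chosen for illustration.

```python
import re

# Matches the host name and the first few counters of an Ansible
# PLAY RECAP line; extra fields (rescued=, ignored=) are ignored.
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)\s+skipped=(?P<skipped>\d+)"
)

def parse_recap(line: str) -> dict:
    """Parse one PLAY RECAP line into a dict of integer counters."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError("not a PLAY RECAP line: %r" % line)
    d = m.groupdict()
    return {k: (v if k == "host" else int(v)) for k, v in d.items()}

line = "testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0"
result = parse_recap(line)
print(result)
```

A check such as `result["failed"] == 0 and result["unreachable"] == 0` is then enough to decide whether the play succeeded on that host.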
2025-06-05 20:11:17.596325 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-05 20:11:17.596341 | orchestrator | ++ export INTERACTIVE=false
2025-06-05 20:11:17.596355 | orchestrator | ++ INTERACTIVE=false
2025-06-05 20:11:17.596366 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-05 20:11:17.596377 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-05 20:11:17.596398 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-05 20:11:17.597275 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-05 20:11:17.603156 | orchestrator |
2025-06-05 20:11:17.603184 | orchestrator | # OpenStack endpoints
2025-06-05 20:11:17.603196 | orchestrator |
2025-06-05 20:11:17.603208 | orchestrator | ++ export MANAGER_VERSION=9.1.0
2025-06-05 20:11:17.603220 | orchestrator | ++ MANAGER_VERSION=9.1.0
2025-06-05 20:11:17.603232 | orchestrator | + export OS_CLOUD=admin
2025-06-05 20:11:17.603243 | orchestrator | + OS_CLOUD=admin
2025-06-05 20:11:17.603255 | orchestrator | + echo
2025-06-05 20:11:17.603267 | orchestrator | + echo '# OpenStack endpoints'
2025-06-05 20:11:17.603278 | orchestrator | + echo
2025-06-05 20:11:17.603289 | orchestrator | + openstack endpoint list
2025-06-05 20:11:20.956252 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-05 20:11:20.956359 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL |
2025-06-05 20:11:20.956397 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-05 20:11:20.956409 | orchestrator | | 0bb844f2f0184acf8dcfefdee89c061f | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 |
2025-06-05 20:11:20.956420 | orchestrator | | 0bd8e79c4bf745d195ee5dd4b4069dc5 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 |
2025-06-05 20:11:20.956431 | orchestrator | | 13070e72bde84fdba8679217262f58d4 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 |
2025-06-05 20:11:20.956442 | orchestrator | | 1fe7daf3a2744392a6c942e7ae2e337a | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 |
2025-06-05 20:11:20.956453 | orchestrator | | 2a1fb6cb01804964b7b373740ffda137 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 |
2025-06-05 20:11:20.956464 | orchestrator | | 2c22b6b8c8f9486aa161d142546472b9 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 |
2025-06-05 20:11:20.956475 | orchestrator | | 3208ecf24e594b9b8d6baec2fc352ddd | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-05 20:11:20.956485 | orchestrator | | 3733882317f74590ba4051b943e3b572 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 |
2025-06-05 20:11:20.956496 | orchestrator | | 5cbc06f1dfc14364b8f5ffd73face0f4 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 |
2025-06-05 20:11:20.956507 | orchestrator | | 624a8ebb9b4347679c3f7ce690c415dc | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 |
2025-06-05 20:11:20.956517 | orchestrator | | 68e37121c916431f8da6a7bee9e19c32 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 |
2025-06-05 20:11:20.956528 | orchestrator | | 7bd9fea2224041cdbef00ef189a3066c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 |
2025-06-05 20:11:20.956539 | orchestrator | | 880c350c11764fe6a732d82da8c4d6bf | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 |
2025-06-05 20:11:20.956550 | orchestrator | | 941e92e3f1f5463997dd01d7fdeb6100 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-05 20:11:20.956561 | orchestrator | | 998f3df3f68d408d993c8e27a4c02c25 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 |
2025-06-05 20:11:20.956572 | orchestrator | | a4f694b4dfba4a20bf17f8ff5cba6077 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 |
2025-06-05 20:11:20.956583 | orchestrator | | a9e7cce2fbf241cd9887ce1f6851b539 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 |
2025-06-05 20:11:20.956594 | orchestrator | | ac5082933f254537b17a67b7e5e3415c | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 |
2025-06-05 20:11:20.956604 | orchestrator | | b1a66f2f70a74287aa50602f6f2cbbd4 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s |
2025-06-05 20:11:20.956638 | orchestrator | | b55974c9b2484fe693c9f64c67053531 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 |
2025-06-05 20:11:20.956667 | orchestrator | | d12691d8b7d24e91adac0e6810f34ddc | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s |
2025-06-05 20:11:20.956679 | orchestrator | | e3f6deddd65544c6adc4c6657c195452 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 |
2025-06-05 20:11:20.956690 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+
2025-06-05 20:11:21.240394 |
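The `manager-version.sh` step traced earlier in this log derives `MANAGER_VERSION` from the manager environment's `configuration.yml` with a single awk expression. A minimal, self-contained sketch of that extraction, using a stand-in sample file instead of `/opt/configuration/environments/manager/configuration.yml`:

```shell
# Stand-in for /opt/configuration/environments/manager/configuration.yml
# (only the key the script reads; the real file contains more settings).
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
---
manager_version: 9.1.0
EOF

# Same awk invocation as in the trace: split on ": " and print the value
# of the top-level manager_version key.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' "$cfg")
echo "$MANAGER_VERSION"   # prints 9.1.0

rm -f "$cfg"
```

This only works for a flat, unquoted scalar at the top level of the YAML file, which is all the script needs here; a quoted or nested value would require a real YAML parser.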
orchestrator |
2025-06-05 20:11:21.240498 | orchestrator | # Cinder
2025-06-05 20:11:21.240513 | orchestrator |
2025-06-05 20:11:21.240526 | orchestrator | + echo
2025-06-05 20:11:21.240537 | orchestrator | + echo '# Cinder'
2025-06-05 20:11:21.240549 | orchestrator | + echo
2025-06-05 20:11:21.240561 | orchestrator | + openstack volume service list
2025-06-05 20:11:24.333694 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-05 20:11:24.333811 | orchestrator | | Binary | Host | Zone | Status | State | Updated At |
2025-06-05 20:11:24.333827 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-05 20:11:24.333839 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-05T20:11:18.000000 |
2025-06-05 20:11:24.333850 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-05T20:11:19.000000 |
2025-06-05 20:11:24.333861 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-05T20:11:18.000000 |
2025-06-05 20:11:24.333871 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-05T20:11:17.000000 |
2025-06-05 20:11:24.333882 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-05T20:11:18.000000 |
2025-06-05 20:11:24.333977 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-05T20:11:19.000000 |
2025-06-05 20:11:24.333993 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-05T20:11:16.000000 |
2025-06-05 20:11:24.334004 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-05T20:11:17.000000 |
2025-06-05 20:11:24.334069 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-05T20:11:17.000000 |
2025-06-05 20:11:24.334082 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2025-06-05 20:11:24.715219 | orchestrator |
2025-06-05 20:11:24.715319 | orchestrator | # Neutron
2025-06-05 20:11:24.715334 | orchestrator |
2025-06-05 20:11:24.715346 | orchestrator | + echo
2025-06-05 20:11:24.715358 | orchestrator | + echo '# Neutron'
2025-06-05 20:11:24.715371 | orchestrator | + echo
2025-06-05 20:11:24.715382 | orchestrator | + openstack network agent list
2025-06-05 20:11:27.974689 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-05 20:11:27.974823 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2025-06-05 20:11:27.974840 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-05 20:11:27.974852 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2025-06-05 20:11:27.974863 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2025-06-05 20:11:27.974898 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2025-06-05 20:11:27.974964 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2025-06-05 20:11:27.974976 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2025-06-05 20:11:27.974987 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2025-06-05 20:11:27.974998 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-05 20:11:27.975009 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-05 20:11:27.975020 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2025-06-05 20:11:27.975030 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2025-06-05 20:11:28.230285 | orchestrator | + openstack network service provider list
2025-06-05 20:11:30.850097 | orchestrator | +---------------+------+---------+
2025-06-05 20:11:30.850203 | orchestrator | | Service Type | Name | Default |
2025-06-05 20:11:30.850217 | orchestrator | +---------------+------+---------+
2025-06-05 20:11:30.850229 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2025-06-05 20:11:30.850240 | orchestrator | +---------------+------+---------+
2025-06-05 20:11:31.285613 | orchestrator |
2025-06-05 20:11:31.285716 | orchestrator | # Nova
2025-06-05 20:11:31.285732 | orchestrator |
2025-06-05 20:11:31.285745 | orchestrator | + echo
2025-06-05 20:11:31.285756 | orchestrator | + echo '# Nova'
2025-06-05 20:11:31.285768 | orchestrator | + echo
2025-06-05 20:11:31.285780 | orchestrator | + openstack compute service list
2025-06-05 20:11:34.514422 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-05 20:11:34.514527 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2025-06-05 20:11:34.514541 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-05 20:11:34.514554 | orchestrator | | cd433b8d-ddaa-4612-8faf-5d7068c6d4e6 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-05T20:11:33.000000 |
2025-06-05 20:11:34.514566 | orchestrator | | 4bb31814-12d2-4075-a4d9-507a73243aef | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-05T20:11:29.000000 |
2025-06-05 20:11:34.514577 | orchestrator | | fb3cac8a-3743-4eb4-9710-7e76b8f91ebd | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-05T20:11:31.000000 |
2025-06-05 20:11:34.514589 | orchestrator | | fd1a716f-8e03-4b10-be1c-b5bb800d432c | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-05T20:11:34.000000 |
2025-06-05 20:11:34.514600 | orchestrator | | eed1299e-09ce-4bc7-acd7-d2b3a63a714b | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-05T20:11:25.000000 |
2025-06-05 20:11:34.514611 | orchestrator | | 247f2045-59cf-422e-bc16-92bdd4693108 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-05T20:11:26.000000 |
2025-06-05 20:11:34.514640 | orchestrator | | 913c1100-558e-4e04-b6bb-b37ba44361e4 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-05T20:11:26.000000 |
2025-06-05 20:11:34.514652 | orchestrator | | 7b98028c-6448-4744-8b1b-062132e028de | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-05T20:11:26.000000 |
2025-06-05 20:11:34.514663 | orchestrator | | 4cd13bae-2686-4343-9cf3-c87046889d11 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-05T20:11:26.000000 |
2025-06-05 20:11:34.514697 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2025-06-05 20:11:34.771545 | orchestrator | + openstack hypervisor list
2025-06-05 20:11:39.583577 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-05 20:11:39.583717 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2025-06-05 20:11:39.583734 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-05 20:11:39.583746 | orchestrator | | 5b3ab44f-f9de-4503-b4d5-729ff6771b30 | testbed-node-4 | QEMU | 192.168.16.14 | up |
2025-06-05 20:11:39.583758 | orchestrator | | e8a59267-5178-43da-b7fb-a433cf92f09e | testbed-node-5 | QEMU | 192.168.16.15 | up |
2025-06-05 20:11:39.583769 | orchestrator | | 0d54e904-44c9-44ae-97e0-73a26e206148 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2025-06-05 20:11:39.583780 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2025-06-05 20:11:39.820663 | orchestrator |
2025-06-05 20:11:39.820784 | orchestrator | # Run OpenStack test play
2025-06-05 20:11:39.820798 | orchestrator |
2025-06-05 20:11:39.820810 | orchestrator | + echo
2025-06-05 20:11:39.820822 | orchestrator | + echo '# Run OpenStack test play'
2025-06-05 20:11:39.820833 | orchestrator | + echo
2025-06-05 20:11:39.820844 | orchestrator | + osism apply --environment openstack test
2025-06-05 20:11:41.539677 | orchestrator | 2025-06-05 20:11:41 | INFO  | Trying to run play test in environment openstack
2025-06-05 20:11:41.545623 | orchestrator | Registering Redlock._acquired_script
2025-06-05 20:11:41.545657 | orchestrator | Registering Redlock._extend_script
2025-06-05 20:11:41.545670 | orchestrator | Registering Redlock._release_script
2025-06-05 20:11:41.603502 | orchestrator | 2025-06-05 20:11:41 | INFO  | Task 15b48ffd-0699-403a-a078-4af79f0f4d9a (test) was prepared for execution.
2025-06-05 20:11:41.603592 | orchestrator | 2025-06-05 20:11:41 | INFO  | It takes a moment until task 15b48ffd-0699-403a-a078-4af79f0f4d9a (test) has been started and output is visible here.
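The Cinder, Neutron, and Nova listings above are eyeballed for "up"/":-)" states. A hedged sketch (not part of the testbed scripts) of how the same check could be automated: request machine-readable output, e.g. `openstack volume service list -f value -c Binary -c Host -c State`, and fail on any `down`. The filter below works on that value-format output fed via stdin, so the sample uses states copied from the log rather than calling the CLI:

```shell
# Succeeds only if no service line reports the word "down".
# Expects "binary host state" per line on stdin (openstack ... -f value).
all_services_up() {
    ! grep -qw down
}

# Sample lines taken from the volume service list above (all were up):
printf '%s\n' \
  'cinder-scheduler testbed-node-0 up' \
  'cinder-volume testbed-node-3@rbd-volumes up' \
  'cinder-backup testbed-node-4 up' | all_services_up && echo "all up"
```

In a live run this would be wired as `openstack volume service list -f value -c Binary -c Host -c State | all_services_up`; `-f value` is a standard openstackclient output formatter.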
2025-06-05 20:11:45.513509 | orchestrator |
2025-06-05 20:11:45.513746 | orchestrator | PLAY [Create test project] *****************************************************
2025-06-05 20:11:45.514230 | orchestrator |
2025-06-05 20:11:45.515141 | orchestrator | TASK [Create test domain] ******************************************************
2025-06-05 20:11:45.519036 | orchestrator | Thursday 05 June 2025 20:11:45 +0000 (0:00:00.081) 0:00:00.081 *********
2025-06-05 20:11:49.045460 | orchestrator | changed: [localhost]
2025-06-05 20:11:49.045605 | orchestrator |
2025-06-05 20:11:49.047031 | orchestrator | TASK [Create test-admin user] **************************************************
2025-06-05 20:11:49.049470 | orchestrator | Thursday 05 June 2025 20:11:49 +0000 (0:00:03.533) 0:00:03.615 *********
2025-06-05 20:11:53.365595 | orchestrator | changed: [localhost]
2025-06-05 20:11:53.365727 | orchestrator |
2025-06-05 20:11:53.366472 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2025-06-05 20:11:53.366837 | orchestrator | Thursday 05 June 2025 20:11:53 +0000 (0:00:04.320) 0:00:07.935 *********
2025-06-05 20:11:59.321446 | orchestrator | changed: [localhost]
2025-06-05 20:11:59.321566 | orchestrator |
2025-06-05 20:11:59.321925 | orchestrator | TASK [Create test project] *****************************************************
2025-06-05 20:11:59.323037 | orchestrator | Thursday 05 June 2025 20:11:59 +0000 (0:00:05.956) 0:00:13.891 *********
2025-06-05 20:12:03.261199 | orchestrator | changed: [localhost]
2025-06-05 20:12:03.261637 | orchestrator |
2025-06-05 20:12:03.262341 | orchestrator | TASK [Create test user] ********************************************************
2025-06-05 20:12:03.262667 | orchestrator | Thursday 05 June 2025 20:12:03 +0000 (0:00:03.939) 0:00:17.831 *********
2025-06-05 20:12:07.363382 | orchestrator | changed: [localhost]
2025-06-05 20:12:07.364334 | orchestrator |
2025-06-05 20:12:07.365239 | orchestrator | TASK [Add member roles to user test] *******************************************
2025-06-05 20:12:07.367013 | orchestrator | Thursday 05 June 2025 20:12:07 +0000 (0:00:04.100) 0:00:21.932 *********
2025-06-05 20:12:19.232202 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2025-06-05 20:12:19.232375 | orchestrator | changed: [localhost] => (item=member)
2025-06-05 20:12:19.232400 | orchestrator | changed: [localhost] => (item=creator)
2025-06-05 20:12:19.233077 | orchestrator |
2025-06-05 20:12:19.234422 | orchestrator | TASK [Create test server group] ************************************************
2025-06-05 20:12:19.235118 | orchestrator | Thursday 05 June 2025 20:12:19 +0000 (0:00:11.868) 0:00:33.801 *********
2025-06-05 20:12:23.380823 | orchestrator | changed: [localhost]
2025-06-05 20:12:23.381248 | orchestrator |
2025-06-05 20:12:23.382169 | orchestrator | TASK [Create ssh security group] ***********************************************
2025-06-05 20:12:23.382631 | orchestrator | Thursday 05 June 2025 20:12:23 +0000 (0:00:04.149) 0:00:37.950 *********
2025-06-05 20:12:28.509269 | orchestrator | changed: [localhost]
2025-06-05 20:12:28.509381 | orchestrator |
2025-06-05 20:12:28.511059 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2025-06-05 20:12:28.511521 | orchestrator | Thursday 05 June 2025 20:12:28 +0000 (0:00:05.126) 0:00:43.077 *********
2025-06-05 20:12:33.784379 | orchestrator | changed: [localhost]
2025-06-05 20:12:33.784490 | orchestrator |
2025-06-05 20:12:33.784531 | orchestrator | TASK [Create icmp security group] **********************************************
2025-06-05 20:12:33.784741 | orchestrator | Thursday 05 June 2025 20:12:33 +0000 (0:00:05.275) 0:00:48.352 *********
2025-06-05 20:12:38.003833 | orchestrator | changed: [localhost]
2025-06-05 20:12:38.004046 | orchestrator |
2025-06-05 20:12:38.004090 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2025-06-05 20:12:38.004105 | orchestrator | Thursday 05 June 2025 20:12:37 +0000 (0:00:04.211) 0:00:52.563 *********
2025-06-05 20:12:42.690292 | orchestrator | changed: [localhost]
2025-06-05 20:12:42.690478 | orchestrator |
2025-06-05 20:12:42.690704 | orchestrator | TASK [Create test keypair] *****************************************************
2025-06-05 20:12:42.691091 | orchestrator | Thursday 05 June 2025 20:12:42 +0000 (0:00:04.682) 0:00:57.245 *********
2025-06-05 20:12:46.583679 | orchestrator | changed: [localhost]
2025-06-05 20:12:46.584213 | orchestrator |
2025-06-05 20:12:46.585269 | orchestrator | TASK [Create test network topology] ********************************************
2025-06-05 20:12:46.586205 | orchestrator | Thursday 05 June 2025 20:12:46 +0000 (0:00:03.904) 0:01:01.150 *********
2025-06-05 20:13:02.516737 | orchestrator | changed: [localhost]
2025-06-05 20:13:02.516882 | orchestrator |
2025-06-05 20:13:02.516899 | orchestrator | TASK [Create test instances] ***************************************************
2025-06-05 20:13:02.517322 | orchestrator | Thursday 05 June 2025 20:13:02 +0000 (0:00:15.935) 0:01:17.085 *********
2025-06-05 20:15:15.377906 | orchestrator | changed: [localhost] => (item=test)
2025-06-05 20:15:15.378088 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-05 20:15:15.378140 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-05 20:15:15.378161 | orchestrator |
2025-06-05 20:15:15.379636 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-05 20:15:45.381567 | orchestrator |
2025-06-05 20:15:45.381697 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-05 20:16:15.379572 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-05 20:16:15.379700 | orchestrator |
2025-06-05 20:16:15.379719 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-06-05 20:16:29.525427 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-05 20:16:29.525552 | orchestrator |
2025-06-05 20:16:29.525569 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-06-05 20:16:29.525583 | orchestrator | Thursday 05 June 2025 20:16:29 +0000 (0:03:27.003) 0:04:44.089 *********
2025-06-05 20:16:52.455346 | orchestrator | changed: [localhost] => (item=test)
2025-06-05 20:16:52.455466 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-05 20:16:52.455481 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-05 20:16:52.455493 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-05 20:16:52.455529 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-05 20:16:52.455541 | orchestrator |
2025-06-05 20:16:52.455555 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-06-05 20:16:52.455714 | orchestrator | Thursday 05 June 2025 20:16:52 +0000 (0:00:22.930) 0:05:07.020 *********
2025-06-05 20:17:23.896506 | orchestrator | changed: [localhost] => (item=test)
2025-06-05 20:17:23.896621 | orchestrator | changed: [localhost] => (item=test-1)
2025-06-05 20:17:23.896638 | orchestrator | changed: [localhost] => (item=test-2)
2025-06-05 20:17:23.896650 | orchestrator | changed: [localhost] => (item=test-3)
2025-06-05 20:17:23.896661 | orchestrator | changed: [localhost] => (item=test-4)
2025-06-05 20:17:23.897372 | orchestrator |
2025-06-05 20:17:23.897779 | orchestrator | TASK [Create test volume] ******************************************************
2025-06-05 20:17:23.898385 | orchestrator | Thursday 05 June 2025 20:17:23 +0000 (0:00:31.442) 0:05:38.463 *********
2025-06-05 20:17:30.559009 | orchestrator | changed: [localhost]
2025-06-05 20:17:30.559125 | orchestrator |
2025-06-05 20:17:30.559728 | orchestrator | TASK [Attach test volume] ******************************************************
2025-06-05 20:17:30.560573 | orchestrator | Thursday 05 June 2025 20:17:30 +0000 (0:00:06.666) 0:05:45.129 *********
2025-06-05 20:17:44.397777 | orchestrator | changed: [localhost]
2025-06-05 20:17:44.397899 | orchestrator |
2025-06-05 20:17:44.397916 | orchestrator | TASK [Create floating ip address] **********************************************
2025-06-05 20:17:44.397930 | orchestrator | Thursday 05 June 2025 20:17:44 +0000 (0:00:13.834) 0:05:58.963 *********
2025-06-05 20:17:49.731345 | orchestrator | ok: [localhost]
2025-06-05 20:17:49.732603 | orchestrator |
2025-06-05 20:17:49.733607 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-06-05 20:17:49.734175 | orchestrator | Thursday 05 June 2025 20:17:49 +0000 (0:00:05.337) 0:06:04.301 *********
2025-06-05 20:17:49.765360 | orchestrator | ok: [localhost] => {
2025-06-05 20:17:49.765695 | orchestrator |  "msg": "192.168.112.198"
2025-06-05 20:17:49.766068 | orchestrator | }
2025-06-05 20:17:49.767214 | orchestrator |
2025-06-05 20:17:49.767910 | orchestrator | PLAY RECAP *********************************************************************
2025-06-05 20:17:49.768481 | orchestrator | 2025-06-05 20:17:49 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-05 20:17:49.768506 | orchestrator | 2025-06-05 20:17:49 | INFO  | Please wait and do not abort execution.
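The trace at the top of this log exports `OSISM_APPLY_RETRY=1` before running `osism apply`. How `include.sh` actually consumes that variable is not shown in the log; the following is only a hypothetical sketch of a retry budget of that shape, with a made-up wrapper name `apply_with_retry`:

```shell
# Hypothetical wrapper, NOT the actual include.sh logic: run a command
# once, then retry up to $OSISM_APPLY_RETRY more times on failure.
apply_with_retry() {
    cmd="$1"
    retries=${OSISM_APPLY_RETRY:-1}
    i=0
    while [ "$i" -le "$retries" ]; do   # one initial try plus N retries
        if $cmd; then
            return 0
        fi
        i=$((i + 1))
    done
    return 1
}

# With OSISM_APPLY_RETRY=1 this makes at most two attempts, e.g.:
#   apply_with_retry "osism apply --environment openstack test"
apply_with_retry true && echo "apply succeeded"
```

The point of keeping the budget in an environment variable, as the script does, is that CI can raise it without editing the deployment scripts.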
2025-06-05 20:17:49.769728 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-05 20:17:49.771883 | orchestrator |
2025-06-05 20:17:49.774432 | orchestrator |
2025-06-05 20:17:49.775655 | orchestrator | TASKS RECAP ********************************************************************
2025-06-05 20:17:49.777738 | orchestrator | Thursday 05 June 2025 20:17:49 +0000 (0:00:00.035) 0:06:04.337 *********
2025-06-05 20:17:49.778208 | orchestrator | ===============================================================================
2025-06-05 20:17:49.778889 | orchestrator | Create test instances ------------------------------------------------- 207.00s
2025-06-05 20:17:49.780832 | orchestrator | Add tag to instances --------------------------------------------------- 31.44s
2025-06-05 20:17:49.782611 | orchestrator | Add metadata to instances ---------------------------------------------- 22.93s
2025-06-05 20:17:49.782635 | orchestrator | Create test network topology ------------------------------------------- 15.94s
2025-06-05 20:17:49.783479 | orchestrator | Attach test volume ----------------------------------------------------- 13.83s
2025-06-05 20:17:49.784530 | orchestrator | Add member roles to user test ------------------------------------------ 11.87s
2025-06-05 20:17:49.784570 | orchestrator | Create test volume ------------------------------------------------------ 6.67s
2025-06-05 20:17:49.785328 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.96s
2025-06-05 20:17:49.785713 | orchestrator | Create floating ip address ---------------------------------------------- 5.34s
2025-06-05 20:17:49.786443 | orchestrator | Add rule to ssh security group ------------------------------------------ 5.28s
2025-06-05 20:17:49.786584 | orchestrator | Create ssh security group ----------------------------------------------- 5.13s
2025-06-05 20:17:49.787267 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.68s
2025-06-05 20:17:49.789758 | orchestrator | Create test-admin user -------------------------------------------------- 4.32s
2025-06-05 20:17:49.789780 | orchestrator | Create icmp security group ---------------------------------------------- 4.21s
2025-06-05 20:17:49.789791 | orchestrator | Create test server group ------------------------------------------------ 4.15s
2025-06-05 20:17:49.789802 | orchestrator | Create test user -------------------------------------------------------- 4.10s
2025-06-05 20:17:49.789813 | orchestrator | Create test project ----------------------------------------------------- 3.94s
2025-06-05 20:17:49.789823 | orchestrator | Create test keypair ----------------------------------------------------- 3.90s
2025-06-05 20:17:49.789834 | orchestrator | Create test domain ------------------------------------------------------ 3.53s
2025-06-05 20:17:49.789845 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-06-05 20:17:50.265641 | orchestrator | + server_list
2025-06-05 20:17:50.265744 | orchestrator | + openstack --os-cloud test server list
2025-06-05 20:17:53.891551 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-05 20:17:53.891658 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2025-06-05 20:17:53.891674 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-05 20:17:53.891686 | orchestrator | | 636f8b68-2a63-485b-9057-e0f141c94758 | test-4 | ACTIVE | auto_allocated_network=10.42.0.24, 192.168.112.199 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-05 20:17:53.891698 | orchestrator | | e62f6831-6bd5-43a7-bf70-644c41580b1a | test-3 | ACTIVE | auto_allocated_network=10.42.0.6, 192.168.112.125 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-05 20:17:53.891709 | orchestrator | | 46eb682c-a037-47fb-81df-83379eac6c3f | test-2 | ACTIVE | auto_allocated_network=10.42.0.59, 192.168.112.166 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-05 20:17:53.891720 | orchestrator | | abf71b29-7d2c-4bed-9057-b43c9931becb | test-1 | ACTIVE | auto_allocated_network=10.42.0.18, 192.168.112.186 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-05 20:17:53.891731 | orchestrator | | 99be6630-e62c-4bcf-a269-18be035093c2 | test | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.198 | Cirros 0.6.2 | SCS-1L-1-5 |
2025-06-05 20:17:53.891742 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-06-05 20:17:54.148764 | orchestrator | + openstack --os-cloud test server show test
2025-06-05 20:17:57.628771 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-05 20:17:57.628887 | orchestrator | | Field | Value |
2025-06-05 20:17:57.628911 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-05 20:17:57.628931 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-05 20:17:57.628974 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-05 20:17:57.628994 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-05 20:17:57.629025 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2025-06-05 20:17:57.629046 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-05 20:17:57.629066 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-05 20:17:57.629083 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-05 20:17:57.629094 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-05 20:17:57.629123 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-05 20:17:57.629135 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-05 20:17:57.629147 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-05 20:17:57.629166 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-05 20:17:57.629177 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-05 20:17:57.629193 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-05 20:17:57.629204 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-05 20:17:57.629215 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-05T20:13:32.000000 |
2025-06-05 20:17:57.629227 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-05 20:17:57.629238 | orchestrator | | accessIPv4 | |
2025-06-05 20:17:57.629249 | orchestrator | | accessIPv6 | |
2025-06-05 20:17:57.629260 | orchestrator | | addresses | auto_allocated_network=10.42.0.36, 192.168.112.198 |
2025-06-05 20:17:57.629315 | orchestrator | | config_drive | |
2025-06-05 20:17:57.629330 | orchestrator | | created | 2025-06-05T20:13:10Z |
2025-06-05 20:17:57.629351 | orchestrator | | description | None |
2025-06-05 20:17:57.629364 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-05 20:17:57.629377 | orchestrator | | hostId | 5f63868bed56be71c45603feec156cf10b5b93d86be154f1c0ef0293 |
2025-06-05 20:17:57.629394 | orchestrator | | host_status | None |
2025-06-05 20:17:57.629408 | orchestrator | | id | 99be6630-e62c-4bcf-a269-18be035093c2 |
2025-06-05 20:17:57.629421 | orchestrator | | image | Cirros 0.6.2 (d6c00530-3550-4ac8-9515-19b2d5743f4f) |
2025-06-05 20:17:57.629433 | orchestrator | | key_name | test |
2025-06-05 20:17:57.629446 | orchestrator | | locked | False |
2025-06-05 20:17:57.629459 | orchestrator | | locked_reason | None |
2025-06-05 20:17:57.629472 | orchestrator | | name | test |
2025-06-05 20:17:57.629492 | orchestrator | | pinned_availability_zone | None |
2025-06-05 20:17:57.629512 | orchestrator | | progress | 0 |
2025-06-05 20:17:57.629525 | orchestrator | | project_id | d77b98bafa754f00a29f7999085343b8 |
2025-06-05 20:17:57.629537 | orchestrator | | properties | hostname='test' |
2025-06-05 20:17:57.629555 | orchestrator | | security_groups | name='icmp' |
2025-06-05 20:17:57.629568 | orchestrator | | | name='ssh' |
2025-06-05 20:17:57.629581 | orchestrator | | server_groups | None |
2025-06-05 20:17:57.629594 | orchestrator | | status | ACTIVE |
2025-06-05 20:17:57.629607 | orchestrator | | tags | test |
2025-06-05 20:17:57.629620 | orchestrator | | trusted_image_certificates | None |
2025-06-05 20:17:57.629633 | orchestrator | | updated | 2025-06-05T20:16:34Z |
2025-06-05 20:17:57.629653 | orchestrator | | user_id | d711c6268a85478c97d58beb6a8cdd4c |
2025-06-05 20:17:57.629672 | orchestrator | | volumes_attached | delete_on_termination='False', id='fdb6dd9e-5278-403d-aa48-7295746d2913' |
2025-06-05 20:17:57.633318 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-05 20:17:57.892775 | orchestrator | + openstack --os-cloud test server show test-1
2025-06-05 20:18:01.059462 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-05 20:18:01.059572 | orchestrator | | Field | Value |
2025-06-05 20:18:01.059603 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2025-06-05 20:18:01.059616 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2025-06-05 20:18:01.059628 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2025-06-05 20:18:01.059639 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2025-06-05 20:18:01.059650 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2025-06-05 20:18:01.059662 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2025-06-05 20:18:01.059692 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2025-06-05 20:18:01.059703 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2025-06-05 20:18:01.059714 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2025-06-05 20:18:01.059743 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2025-06-05 20:18:01.059755 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2025-06-05 20:18:01.059767 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2025-06-05 20:18:01.059778 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2025-06-05 20:18:01.059789 | orchestrator | | OS-EXT-STS:power_state | Running |
2025-06-05 20:18:01.059807 | orchestrator | | OS-EXT-STS:task_state | None |
2025-06-05 20:18:01.059819 | orchestrator | | OS-EXT-STS:vm_state | active |
2025-06-05 20:18:01.059830 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-05T20:14:17.000000 |
2025-06-05 20:18:01.059848 | orchestrator | | OS-SRV-USG:terminated_at | None |
2025-06-05 20:18:01.059860 | orchestrator | | accessIPv4 | |
2025-06-05 20:18:01.059871 | orchestrator | | accessIPv6 | |
2025-06-05 20:18:01.059882 | orchestrator | | addresses | auto_allocated_network=10.42.0.18, 192.168.112.186 |
2025-06-05 20:18:01.059901 | orchestrator | | config_drive | |
2025-06-05 20:18:01.059913 | orchestrator | | created | 2025-06-05T20:13:55Z |
2025-06-05 20:18:01.059929 | orchestrator | | description | None |
2025-06-05 20:18:01.059940 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2025-06-05 20:18:01.059951 | orchestrator | | hostId | a0c2f517fa14110b59740e766dec831714add4964568a51c7ce58090 |
2025-06-05 20:18:01.059963 | orchestrator | | host_status | None |
2025-06-05 20:18:01.059974 | orchestrator | | id | abf71b29-7d2c-4bed-9057-b43c9931becb |
2025-06-05 20:18:01.059992 | orchestrator | | image | Cirros 0.6.2 (d6c00530-3550-4ac8-9515-19b2d5743f4f) |
2025-06-05 20:18:01.060004 | orchestrator | | key_name | test |
2025-06-05 20:18:01.060015 | orchestrator
| | locked | False | 2025-06-05 20:18:01.060026 | orchestrator | | locked_reason | None | 2025-06-05 20:18:01.060037 | orchestrator | | name | test-1 | 2025-06-05 20:18:01.060054 | orchestrator | | pinned_availability_zone | None | 2025-06-05 20:18:01.060066 | orchestrator | | progress | 0 | 2025-06-05 20:18:01.060082 | orchestrator | | project_id | d77b98bafa754f00a29f7999085343b8 | 2025-06-05 20:18:01.060093 | orchestrator | | properties | hostname='test-1' | 2025-06-05 20:18:01.060104 | orchestrator | | security_groups | name='icmp' | 2025-06-05 20:18:01.060115 | orchestrator | | | name='ssh' | 2025-06-05 20:18:01.060133 | orchestrator | | server_groups | None | 2025-06-05 20:18:01.060144 | orchestrator | | status | ACTIVE | 2025-06-05 20:18:01.060157 | orchestrator | | tags | test | 2025-06-05 20:18:01.060168 | orchestrator | | trusted_image_certificates | None | 2025-06-05 20:18:01.060179 | orchestrator | | updated | 2025-06-05T20:16:38Z | 2025-06-05 20:18:01.060195 | orchestrator | | user_id | d711c6268a85478c97d58beb6a8cdd4c | 2025-06-05 20:18:01.060207 | orchestrator | | volumes_attached | | 2025-06-05 20:18:01.066331 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:01.390846 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-05 20:18:04.530488 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2025-06-05 20:18:04.530562 | orchestrator | | Field | Value | 2025-06-05 20:18:04.530588 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:04.530593 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-05 20:18:04.530597 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-05 20:18:04.530601 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-05 20:18:04.530605 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-05 20:18:04.530609 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-05 20:18:04.530613 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-05 20:18:04.530617 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-05 20:18:04.530621 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-05 20:18:04.530646 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-05 20:18:04.530650 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-05 20:18:04.530658 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-05 20:18:04.530662 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-05 20:18:04.530666 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-05 20:18:04.530669 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-05 20:18:04.530673 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-05 20:18:04.530677 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-05T20:14:56.000000 | 2025-06-05 20:18:04.530681 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-05 20:18:04.530685 | orchestrator | | accessIPv4 | | 2025-06-05 20:18:04.530689 | orchestrator | | accessIPv6 | | 2025-06-05 
20:18:04.530693 | orchestrator | | addresses | auto_allocated_network=10.42.0.59, 192.168.112.166 | 2025-06-05 20:18:04.530700 | orchestrator | | config_drive | | 2025-06-05 20:18:04.530709 | orchestrator | | created | 2025-06-05T20:14:34Z | 2025-06-05 20:18:04.530713 | orchestrator | | description | None | 2025-06-05 20:18:04.530717 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-05 20:18:04.530721 | orchestrator | | hostId | 43680c8ad00abaa804b12b4fe0ecd635b69b0d1c416582b31a12a728 | 2025-06-05 20:18:04.530725 | orchestrator | | host_status | None | 2025-06-05 20:18:04.530728 | orchestrator | | id | 46eb682c-a037-47fb-81df-83379eac6c3f | 2025-06-05 20:18:04.530732 | orchestrator | | image | Cirros 0.6.2 (d6c00530-3550-4ac8-9515-19b2d5743f4f) | 2025-06-05 20:18:04.530736 | orchestrator | | key_name | test | 2025-06-05 20:18:04.530740 | orchestrator | | locked | False | 2025-06-05 20:18:04.530744 | orchestrator | | locked_reason | None | 2025-06-05 20:18:04.530761 | orchestrator | | name | test-2 | 2025-06-05 20:18:04.530768 | orchestrator | | pinned_availability_zone | None | 2025-06-05 20:18:04.530772 | orchestrator | | progress | 0 | 2025-06-05 20:18:04.530776 | orchestrator | | project_id | d77b98bafa754f00a29f7999085343b8 | 2025-06-05 20:18:04.530780 | orchestrator | | properties | hostname='test-2' | 2025-06-05 20:18:04.530784 | orchestrator | | security_groups | name='icmp' | 2025-06-05 20:18:04.530788 | orchestrator | | | name='ssh' | 2025-06-05 20:18:04.530792 | orchestrator | | server_groups | None | 2025-06-05 20:18:04.530796 | orchestrator | | status | ACTIVE | 2025-06-05 20:18:04.530799 | orchestrator | | tags | test | 2025-06-05 20:18:04.530803 | orchestrator 
| | trusted_image_certificates | None | 2025-06-05 20:18:04.530811 | orchestrator | | updated | 2025-06-05T20:16:43Z | 2025-06-05 20:18:04.530819 | orchestrator | | user_id | d711c6268a85478c97d58beb6a8cdd4c | 2025-06-05 20:18:04.530823 | orchestrator | | volumes_attached | | 2025-06-05 20:18:04.534835 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:04.799712 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-05 20:18:08.056100 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:08.056213 | orchestrator | | Field | Value | 2025-06-05 20:18:08.056229 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:08.056242 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-05 20:18:08.056254 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-05 20:18:08.056265 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-05 20:18:08.056277 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-05 20:18:08.056375 | orchestrator | | 
OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-05 20:18:08.056405 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-05 20:18:08.056417 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-05 20:18:08.056429 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-05 20:18:08.056460 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-05 20:18:08.056472 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-05 20:18:08.056483 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-05 20:18:08.056495 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-05 20:18:08.056506 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-05 20:18:08.056517 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-05 20:18:08.056528 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-05 20:18:08.056553 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-05T20:15:35.000000 | 2025-06-05 20:18:08.056573 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-05 20:18:08.056597 | orchestrator | | accessIPv4 | | 2025-06-05 20:18:08.056618 | orchestrator | | accessIPv6 | | 2025-06-05 20:18:08.056640 | orchestrator | | addresses | auto_allocated_network=10.42.0.6, 192.168.112.125 | 2025-06-05 20:18:08.056664 | orchestrator | | config_drive | | 2025-06-05 20:18:08.056678 | orchestrator | | created | 2025-06-05T20:15:18Z | 2025-06-05 20:18:08.056692 | orchestrator | | description | None | 2025-06-05 20:18:08.056725 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-05 20:18:08.056739 | orchestrator | | hostId | 5f63868bed56be71c45603feec156cf10b5b93d86be154f1c0ef0293 | 2025-06-05 20:18:08.056761 | 
orchestrator | | host_status | None | 2025-06-05 20:18:08.056774 | orchestrator | | id | e62f6831-6bd5-43a7-bf70-644c41580b1a | 2025-06-05 20:18:08.056787 | orchestrator | | image | Cirros 0.6.2 (d6c00530-3550-4ac8-9515-19b2d5743f4f) | 2025-06-05 20:18:08.056800 | orchestrator | | key_name | test | 2025-06-05 20:18:08.056826 | orchestrator | | locked | False | 2025-06-05 20:18:08.056846 | orchestrator | | locked_reason | None | 2025-06-05 20:18:08.056866 | orchestrator | | name | test-3 | 2025-06-05 20:18:08.056887 | orchestrator | | pinned_availability_zone | None | 2025-06-05 20:18:08.056899 | orchestrator | | progress | 0 | 2025-06-05 20:18:08.056910 | orchestrator | | project_id | d77b98bafa754f00a29f7999085343b8 | 2025-06-05 20:18:08.056921 | orchestrator | | properties | hostname='test-3' | 2025-06-05 20:18:08.056940 | orchestrator | | security_groups | name='icmp' | 2025-06-05 20:18:08.056952 | orchestrator | | | name='ssh' | 2025-06-05 20:18:08.056963 | orchestrator | | server_groups | None | 2025-06-05 20:18:08.056974 | orchestrator | | status | ACTIVE | 2025-06-05 20:18:08.056985 | orchestrator | | tags | test | 2025-06-05 20:18:08.057002 | orchestrator | | trusted_image_certificates | None | 2025-06-05 20:18:08.057013 | orchestrator | | updated | 2025-06-05T20:16:47Z | 2025-06-05 20:18:08.057030 | orchestrator | | user_id | d711c6268a85478c97d58beb6a8cdd4c | 2025-06-05 20:18:08.057041 | orchestrator | | volumes_attached | | 2025-06-05 20:18:08.060329 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:08.284950 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-05 20:18:11.615880 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:11.616012 | orchestrator | | Field | Value | 2025-06-05 20:18:11.616029 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:11.616041 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-05 20:18:11.616054 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-05 20:18:11.616066 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-05 20:18:11.616078 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-05 20:18:11.616090 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-05 20:18:11.616102 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-05 20:18:11.616114 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-05 20:18:11.616126 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-05 20:18:11.616175 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-05 20:18:11.616196 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-05 20:18:11.616208 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-05 20:18:11.616220 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-05 20:18:11.616232 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-05 20:18:11.616244 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-05 20:18:11.616256 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-05 20:18:11.616272 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-05T20:16:14.000000 | 2025-06-05 20:18:11.616284 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-05 20:18:11.616359 | orchestrator | | accessIPv4 | | 2025-06-05 20:18:11.616371 | orchestrator | | accessIPv6 | | 2025-06-05 20:18:11.616390 | orchestrator | | addresses | auto_allocated_network=10.42.0.24, 192.168.112.199 | 2025-06-05 20:18:11.616409 | orchestrator | | config_drive | | 2025-06-05 20:18:11.616421 | orchestrator | | created | 2025-06-05T20:15:56Z | 2025-06-05 20:18:11.616432 | orchestrator | | description | None | 2025-06-05 20:18:11.616443 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-05 20:18:11.616454 | orchestrator | | hostId | a0c2f517fa14110b59740e766dec831714add4964568a51c7ce58090 | 2025-06-05 20:18:11.616464 | orchestrator | | host_status | None | 2025-06-05 20:18:11.616481 | orchestrator | | id | 636f8b68-2a63-485b-9057-e0f141c94758 | 2025-06-05 20:18:11.616492 | orchestrator | | image | Cirros 0.6.2 (d6c00530-3550-4ac8-9515-19b2d5743f4f) | 2025-06-05 20:18:11.616503 | orchestrator | | key_name | test | 2025-06-05 20:18:11.616514 | orchestrator | | locked | False | 2025-06-05 20:18:11.616532 | orchestrator | | locked_reason | None | 2025-06-05 20:18:11.616543 | orchestrator | | name | test-4 | 2025-06-05 20:18:11.616560 | orchestrator | | pinned_availability_zone | None | 2025-06-05 20:18:11.616571 | orchestrator | | progress | 0 | 2025-06-05 20:18:11.616582 | orchestrator | | project_id | d77b98bafa754f00a29f7999085343b8 | 2025-06-05 20:18:11.616593 | orchestrator | | properties | hostname='test-4' | 2025-06-05 
20:18:11.616604 | orchestrator | | security_groups | name='icmp' | 2025-06-05 20:18:11.616614 | orchestrator | | | name='ssh' | 2025-06-05 20:18:11.616630 | orchestrator | | server_groups | None | 2025-06-05 20:18:11.616641 | orchestrator | | status | ACTIVE | 2025-06-05 20:18:11.616652 | orchestrator | | tags | test | 2025-06-05 20:18:11.616669 | orchestrator | | trusted_image_certificates | None | 2025-06-05 20:18:11.616680 | orchestrator | | updated | 2025-06-05T20:16:52Z | 2025-06-05 20:18:11.616696 | orchestrator | | user_id | d711c6268a85478c97d58beb6a8cdd4c | 2025-06-05 20:18:11.616711 | orchestrator | | volumes_attached | | 2025-06-05 20:18:11.624325 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-05 20:18:11.858545 | orchestrator | + server_ping 2025-06-05 20:18:11.860160 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-05 20:18:11.860370 | orchestrator | ++ tr -d '\r' 2025-06-05 20:18:14.823000 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-05 20:18:14.823101 | orchestrator | + ping -c3 192.168.112.198 2025-06-05 20:18:14.845531 | orchestrator | PING 192.168.112.198 (192.168.112.198) 56(84) bytes of data. 
2025-06-05 20:18:14.845620 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=1 ttl=63 time=14.3 ms 2025-06-05 20:18:15.835830 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=2 ttl=63 time=3.14 ms 2025-06-05 20:18:16.837086 | orchestrator | 64 bytes from 192.168.112.198: icmp_seq=3 ttl=63 time=2.30 ms 2025-06-05 20:18:16.837190 | orchestrator | 2025-06-05 20:18:16.837206 | orchestrator | --- 192.168.112.198 ping statistics --- 2025-06-05 20:18:16.837219 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-05 20:18:16.837231 | orchestrator | rtt min/avg/max/mdev = 2.304/6.597/14.348/5.491 ms 2025-06-05 20:18:16.837243 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-05 20:18:16.837255 | orchestrator | + ping -c3 192.168.112.166 2025-06-05 20:18:16.847181 | orchestrator | PING 192.168.112.166 (192.168.112.166) 56(84) bytes of data. 2025-06-05 20:18:16.847243 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=1 ttl=63 time=7.74 ms 2025-06-05 20:18:17.843877 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=2 ttl=63 time=2.22 ms 2025-06-05 20:18:18.846009 | orchestrator | 64 bytes from 192.168.112.166: icmp_seq=3 ttl=63 time=2.13 ms 2025-06-05 20:18:18.846182 | orchestrator | 2025-06-05 20:18:18.846209 | orchestrator | --- 192.168.112.166 ping statistics --- 2025-06-05 20:18:18.846228 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-05 20:18:18.846248 | orchestrator | rtt min/avg/max/mdev = 2.130/4.029/7.740/2.624 ms 2025-06-05 20:18:18.846269 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-05 20:18:18.846366 | orchestrator | + ping -c3 192.168.112.186 2025-06-05 20:18:18.858701 | orchestrator | PING 192.168.112.186 (192.168.112.186) 56(84) bytes of data. 
2025-06-05 20:18:18.858781 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=1 ttl=63 time=7.38 ms 2025-06-05 20:18:19.856411 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=2 ttl=63 time=3.09 ms 2025-06-05 20:18:20.856477 | orchestrator | 64 bytes from 192.168.112.186: icmp_seq=3 ttl=63 time=1.86 ms 2025-06-05 20:18:20.856590 | orchestrator | 2025-06-05 20:18:20.856606 | orchestrator | --- 192.168.112.186 ping statistics --- 2025-06-05 20:18:20.856619 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-05 20:18:20.856630 | orchestrator | rtt min/avg/max/mdev = 1.860/4.112/7.384/2.367 ms 2025-06-05 20:18:20.857536 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-05 20:18:20.857560 | orchestrator | + ping -c3 192.168.112.125 2025-06-05 20:18:20.869515 | orchestrator | PING 192.168.112.125 (192.168.112.125) 56(84) bytes of data. 2025-06-05 20:18:20.869560 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=1 ttl=63 time=7.07 ms 2025-06-05 20:18:21.866441 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=2 ttl=63 time=3.46 ms 2025-06-05 20:18:22.867670 | orchestrator | 64 bytes from 192.168.112.125: icmp_seq=3 ttl=63 time=2.31 ms 2025-06-05 20:18:22.867771 | orchestrator | 2025-06-05 20:18:22.867788 | orchestrator | --- 192.168.112.125 ping statistics --- 2025-06-05 20:18:22.867801 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-05 20:18:22.867813 | orchestrator | rtt min/avg/max/mdev = 2.306/4.281/7.074/2.030 ms 2025-06-05 20:18:22.867825 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-05 20:18:22.867837 | orchestrator | + ping -c3 192.168.112.199 2025-06-05 20:18:22.879614 | orchestrator | PING 192.168.112.199 (192.168.112.199) 56(84) bytes of data. 
2025-06-05 20:18:22.879646 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=1 ttl=63 time=7.02 ms 2025-06-05 20:18:23.876117 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=2 ttl=63 time=3.04 ms 2025-06-05 20:18:24.874819 | orchestrator | 64 bytes from 192.168.112.199: icmp_seq=3 ttl=63 time=2.12 ms 2025-06-05 20:18:24.874928 | orchestrator | 2025-06-05 20:18:24.874954 | orchestrator | --- 192.168.112.199 ping statistics --- 2025-06-05 20:18:24.874975 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2000ms 2025-06-05 20:18:24.874994 | orchestrator | rtt min/avg/max/mdev = 2.123/4.058/7.017/2.124 ms 2025-06-05 20:18:24.876710 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-05 20:18:25.022140 | orchestrator | ok: Runtime: 0:10:36.550218 2025-06-05 20:18:25.074395 | 2025-06-05 20:18:25.074582 | TASK [Run tempest] 2025-06-05 20:18:25.615476 | orchestrator | skipping: Conditional result was False 2025-06-05 20:18:25.632221 | 2025-06-05 20:18:25.632379 | TASK [Check prometheus alert status] 2025-06-05 20:18:26.171020 | orchestrator | skipping: Conditional result was False 2025-06-05 20:18:26.173985 | 2025-06-05 20:18:26.174150 | PLAY RECAP 2025-06-05 20:18:26.174333 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-06-05 20:18:26.174419 | 2025-06-05 20:18:26.412888 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-05 20:18:26.415941 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-05 20:18:27.182702 | 2025-06-05 20:18:27.182944 | PLAY [Post output play] 2025-06-05 20:18:27.199065 | 2025-06-05 20:18:27.199195 | LOOP [stage-output : Register sources] 2025-06-05 20:18:27.270203 | 2025-06-05 20:18:27.270537 | TASK [stage-output : Check sudo] 2025-06-05 20:18:28.180192 | orchestrator | sudo: a password is required 2025-06-05 20:18:28.321461 | orchestrator | ok: Runtime: 0:00:00.054675 
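The `server_ping` step traced above can be reconstructed from the `+`-prefixed xtrace lines: it lists all ACTIVE floating IPs, strips carriage returns, and pings each address three times. A minimal sketch, assuming the `openstack` CLI is installed and a cloud named `test` is configured in `clouds.yaml`:

```shell
# Sketch of the server_ping check seen in the xtrace output above.
# Iterates over every ACTIVE floating IP and sends three ICMP probes;
# the `tr -d '\r'` matches the trace and guards against CRLF output.
server_ping() {
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        ping -c3 "$address"
    done
}
```

Because `ping -c3` exits non-zero on total packet loss, running this under `set -e` (as the xtrace suggests) makes any unreachable floating IP fail the job immediately.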
2025-06-05 20:18:28.336263 | 2025-06-05 20:18:28.336426 | LOOP [stage-output : Set source and destination for files and folders] 2025-06-05 20:18:28.376608 | 2025-06-05 20:18:28.376932 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-06-05 20:18:28.465035 | orchestrator | ok 2025-06-05 20:18:28.474454 | 2025-06-05 20:18:28.474612 | LOOP [stage-output : Ensure target folders exist] 2025-06-05 20:18:28.936207 | orchestrator | ok: "docs" 2025-06-05 20:18:28.936633 | 2025-06-05 20:18:29.214255 | orchestrator | ok: "artifacts" 2025-06-05 20:18:29.470329 | orchestrator | ok: "logs" 2025-06-05 20:18:29.497015 | 2025-06-05 20:18:29.497205 | LOOP [stage-output : Copy files and folders to staging folder] 2025-06-05 20:18:29.533863 | 2025-06-05 20:18:29.534126 | TASK [stage-output : Make all log files readable] 2025-06-05 20:18:29.818706 | orchestrator | ok 2025-06-05 20:18:29.829825 | 2025-06-05 20:18:29.829969 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-06-05 20:18:29.854897 | orchestrator | skipping: Conditional result was False 2025-06-05 20:18:29.874650 | 2025-06-05 20:18:29.874898 | TASK [stage-output : Discover log files for compression] 2025-06-05 20:18:29.891352 | orchestrator | skipping: Conditional result was False 2025-06-05 20:18:29.904411 | 2025-06-05 20:18:29.904551 | LOOP [stage-output : Archive everything from logs] 2025-06-05 20:18:29.936714 | 2025-06-05 20:18:29.936911 | PLAY [Post cleanup play] 2025-06-05 20:18:29.945060 | 2025-06-05 20:18:29.945176 | TASK [Set cloud fact (Zuul deployment)] 2025-06-05 20:18:30.005559 | orchestrator | ok 2025-06-05 20:18:30.016509 | 2025-06-05 20:18:30.016665 | TASK [Set cloud fact (local deployment)] 2025-06-05 20:18:30.061279 | orchestrator | skipping: Conditional result was False 2025-06-05 20:18:30.077264 | 2025-06-05 20:18:30.077429 | TASK [Clean the cloud environment] 2025-06-05 20:18:30.870852 | orchestrator | 2025-06-05 20:18:30 - clean up servers 2025-06-05 
20:18:31.620989 | orchestrator | 2025-06-05 20:18:31 - testbed-manager 2025-06-05 20:18:31.703783 | orchestrator | 2025-06-05 20:18:31 - testbed-node-3 2025-06-05 20:18:31.806879 | orchestrator | 2025-06-05 20:18:31 - testbed-node-0 2025-06-05 20:18:31.890754 | orchestrator | 2025-06-05 20:18:31 - testbed-node-2 2025-06-05 20:18:31.978670 | orchestrator | 2025-06-05 20:18:31 - testbed-node-5 2025-06-05 20:18:32.076033 | orchestrator | 2025-06-05 20:18:32 - testbed-node-1 2025-06-05 20:18:32.165993 | orchestrator | 2025-06-05 20:18:32 - testbed-node-4 2025-06-05 20:18:32.264748 | orchestrator | 2025-06-05 20:18:32 - clean up keypairs 2025-06-05 20:18:32.282204 | orchestrator | 2025-06-05 20:18:32 - testbed 2025-06-05 20:18:32.306221 | orchestrator | 2025-06-05 20:18:32 - wait for servers to be gone 2025-06-05 20:18:43.183576 | orchestrator | 2025-06-05 20:18:43 - clean up ports 2025-06-05 20:18:43.367009 | orchestrator | 2025-06-05 20:18:43 - 0c0b7b55-2b33-4afb-8f42-672f056cd168 2025-06-05 20:18:43.788239 | orchestrator | 2025-06-05 20:18:43 - 28dcff1e-e723-4031-90b9-4e3f1ae69422 2025-06-05 20:18:44.088302 | orchestrator | 2025-06-05 20:18:44 - 3b75b404-faa7-45bf-acdb-d9324bc6c2c6 2025-06-05 20:18:44.329130 | orchestrator | 2025-06-05 20:18:44 - 5a72bd44-44a9-4455-b290-572a36674019 2025-06-05 20:18:44.549781 | orchestrator | 2025-06-05 20:18:44 - 74226855-117a-4604-9535-fa4bd2352ae2 2025-06-05 20:18:44.756588 | orchestrator | 2025-06-05 20:18:44 - cb0dc8f9-716f-4655-9e2e-b52a47770132 2025-06-05 20:18:44.953593 | orchestrator | 2025-06-05 20:18:44 - fb31edf7-c24c-49fd-a8ec-b6810822a2cd 2025-06-05 20:18:45.155578 | orchestrator | 2025-06-05 20:18:45 - clean up volumes 2025-06-05 20:18:45.272975 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-5-node-base 2025-06-05 20:18:45.314077 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-3-node-base 2025-06-05 20:18:45.353733 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-4-node-base 2025-06-05 20:18:45.395690 
| orchestrator | 2025-06-05 20:18:45 - testbed-volume-manager-base
2025-06-05 20:18:45.435820 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-1-node-base
2025-06-05 20:18:45.481256 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-0-node-base
2025-06-05 20:18:45.525574 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-2-node-base
2025-06-05 20:18:45.564758 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-7-node-4
2025-06-05 20:18:45.611074 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-8-node-5
2025-06-05 20:18:45.653731 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-0-node-3
2025-06-05 20:18:45.695757 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-6-node-3
2025-06-05 20:18:45.740727 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-5-node-5
2025-06-05 20:18:45.781938 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-4-node-4
2025-06-05 20:18:45.827015 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-3-node-3
2025-06-05 20:18:45.869009 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-2-node-5
2025-06-05 20:18:45.914685 | orchestrator | 2025-06-05 20:18:45 - testbed-volume-1-node-4
2025-06-05 20:18:45.954517 | orchestrator | 2025-06-05 20:18:45 - disconnect routers
2025-06-05 20:18:46.075922 | orchestrator | 2025-06-05 20:18:46 - testbed
2025-06-05 20:18:47.565239 | orchestrator | 2025-06-05 20:18:47 - clean up subnets
2025-06-05 20:18:47.617756 | orchestrator | 2025-06-05 20:18:47 - subnet-testbed-management
2025-06-05 20:18:48.184363 | orchestrator | 2025-06-05 20:18:48 - clean up networks
2025-06-05 20:18:48.353177 | orchestrator | 2025-06-05 20:18:48 - net-testbed-management
2025-06-05 20:18:48.699622 | orchestrator | 2025-06-05 20:18:48 - clean up security groups
2025-06-05 20:18:48.753003 | orchestrator | 2025-06-05 20:18:48 - testbed-management
2025-06-05 20:18:48.867388 | orchestrator | 2025-06-05 20:18:48 - testbed-node
2025-06-05 20:18:48.980623 | orchestrator | 2025-06-05 20:18:48 - clean up floating ips
2025-06-05 20:18:49.014775 | orchestrator | 2025-06-05 20:18:49 - 81.163.193.172
2025-06-05 20:18:49.337179 | orchestrator | 2025-06-05 20:18:49 - clean up routers
2025-06-05 20:18:49.431679 | orchestrator | 2025-06-05 20:18:49 - testbed
2025-06-05 20:18:51.137915 | orchestrator | ok: Runtime: 0:00:20.420928
2025-06-05 20:18:51.143601 |
2025-06-05 20:18:51.143860 | PLAY RECAP
2025-06-05 20:18:51.144038 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-06-05 20:18:51.144107 |
2025-06-05 20:18:51.297353 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-05 20:18:51.299968 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-05 20:18:52.029411 |
2025-06-05 20:18:52.029582 | PLAY [Cleanup play]
2025-06-05 20:18:52.046114 |
2025-06-05 20:18:52.046265 | TASK [Set cloud fact (Zuul deployment)]
2025-06-05 20:18:52.099964 | orchestrator | ok
2025-06-05 20:18:52.106814 |
2025-06-05 20:18:52.106982 | TASK [Set cloud fact (local deployment)]
2025-06-05 20:18:52.151503 | orchestrator | skipping: Conditional result was False
2025-06-05 20:18:52.169529 |
2025-06-05 20:18:52.169711 | TASK [Clean the cloud environment]
2025-06-05 20:18:53.308408 | orchestrator | 2025-06-05 20:18:53 - clean up servers
2025-06-05 20:18:53.779485 | orchestrator | 2025-06-05 20:18:53 - clean up keypairs
2025-06-05 20:18:53.798982 | orchestrator | 2025-06-05 20:18:53 - wait for servers to be gone
2025-06-05 20:18:53.843465 | orchestrator | 2025-06-05 20:18:53 - clean up ports
2025-06-05 20:18:53.922295 | orchestrator | 2025-06-05 20:18:53 - clean up volumes
2025-06-05 20:18:53.983642 | orchestrator | 2025-06-05 20:18:53 - disconnect routers
2025-06-05 20:18:54.013844 | orchestrator | 2025-06-05 20:18:54 - clean up subnets
2025-06-05 20:18:54.034243 | orchestrator | 2025-06-05 20:18:54 - clean up networks
2025-06-05 20:18:54.177144 | orchestrator | 2025-06-05 20:18:54 - clean up security groups
2025-06-05 20:18:54.214234 | orchestrator | 2025-06-05 20:18:54 - clean up floating ips
2025-06-05 20:18:54.238368 | orchestrator | 2025-06-05 20:18:54 - clean up routers
2025-06-05 20:18:54.710638 | orchestrator | ok: Runtime: 0:00:01.322958
2025-06-05 20:18:54.715007 |
2025-06-05 20:18:54.715181 | PLAY RECAP
2025-06-05 20:18:54.715306 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-06-05 20:18:54.715372 |
2025-06-05 20:18:54.847627 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-06-05 20:18:54.848684 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-05 20:18:55.624636 |
2025-06-05 20:18:55.624821 | PLAY [Base post-fetch]
2025-06-05 20:18:55.640567 |
2025-06-05 20:18:55.640715 | TASK [fetch-output : Set log path for multiple nodes]
2025-06-05 20:18:55.706447 | orchestrator | skipping: Conditional result was False
2025-06-05 20:18:55.720638 |
2025-06-05 20:18:55.720872 | TASK [fetch-output : Set log path for single node]
2025-06-05 20:18:55.761602 | orchestrator | ok
2025-06-05 20:18:55.768431 |
2025-06-05 20:18:55.768558 | LOOP [fetch-output : Ensure local output dirs]
2025-06-05 20:18:56.325619 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/work/logs"
2025-06-05 20:18:56.619390 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/work/artifacts"
2025-06-05 20:18:56.924163 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/110d1ba26f2f4a0a94540b539119b677/work/docs"
2025-06-05 20:18:56.942399 |
2025-06-05 20:18:56.942632 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-06-05 20:18:57.892579 | orchestrator | changed: .d..t...... ./
2025-06-05 20:18:57.892878 | orchestrator | changed: All items complete
2025-06-05 20:18:57.892921 |
2025-06-05 20:18:58.674038 | orchestrator | changed: .d..t...... ./
2025-06-05 20:18:59.456669 | orchestrator | changed: .d..t...... ./
2025-06-05 20:18:59.486510 |
2025-06-05 20:18:59.486666 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-06-05 20:18:59.515056 | orchestrator | skipping: Conditional result was False
2025-06-05 20:18:59.521517 | orchestrator | skipping: Conditional result was False
2025-06-05 20:18:59.546612 |
2025-06-05 20:18:59.546968 | PLAY RECAP
2025-06-05 20:18:59.547064 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-06-05 20:18:59.547109 |
2025-06-05 20:18:59.710152 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-06-05 20:18:59.711309 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-05 20:19:00.460921 |
2025-06-05 20:19:00.461094 | PLAY [Base post]
2025-06-05 20:19:00.476308 |
2025-06-05 20:19:00.476465 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-06-05 20:19:01.484099 | orchestrator | changed
2025-06-05 20:19:01.491434 |
2025-06-05 20:19:01.491555 | PLAY RECAP
2025-06-05 20:19:01.491620 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-06-05 20:19:01.491680 |
2025-06-05 20:19:01.618438 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-06-05 20:19:01.620691 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-06-05 20:19:02.409138 |
2025-06-05 20:19:02.409311 | PLAY [Base post-logs]
2025-06-05 20:19:02.420269 |
2025-06-05 20:19:02.420409 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-06-05 20:19:02.896256 | localhost | changed
2025-06-05 20:19:02.913310 |
2025-06-05 20:19:02.913487 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-06-05 20:19:02.952094 | localhost | ok
2025-06-05 20:19:02.959136 |
2025-06-05 20:19:02.959313 | TASK [Set zuul-log-path fact]
2025-06-05 20:19:02.978165 | localhost | ok
2025-06-05 20:19:02.992330 |
2025-06-05 20:19:02.992503 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-05 20:19:03.021157 | localhost | ok
2025-06-05 20:19:03.028130 |
2025-06-05 20:19:03.028323 | TASK [upload-logs : Create log directories]
2025-06-05 20:19:03.538174 | localhost | changed
2025-06-05 20:19:03.543613 |
2025-06-05 20:19:03.543859 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-06-05 20:19:04.069895 | localhost -> localhost | ok: Runtime: 0:00:00.007751
2025-06-05 20:19:04.075299 |
2025-06-05 20:19:04.075435 | TASK [upload-logs : Upload logs to log server]
2025-06-05 20:19:04.639947 | localhost | Output suppressed because no_log was given
2025-06-05 20:19:04.641937 |
2025-06-05 20:19:04.642045 | LOOP [upload-logs : Compress console log and json output]
2025-06-05 20:19:04.699457 | localhost | skipping: Conditional result was False
2025-06-05 20:19:04.704223 | localhost | skipping: Conditional result was False
2025-06-05 20:19:04.718379 |
2025-06-05 20:19:04.718610 | LOOP [upload-logs : Upload compressed console log and json output]
2025-06-05 20:19:04.769015 | localhost | skipping: Conditional result was False
2025-06-05 20:19:04.769855 |
2025-06-05 20:19:04.772460 | localhost | skipping: Conditional result was False
2025-06-05 20:19:04.780195 |
2025-06-05 20:19:04.780309 | LOOP [upload-logs : Upload console log and json output]